Chat with David's Saturday AI Thoughts (experimental)

There's a chatbot in the bottom-right corner. It can search across all ten editions, find themes, surface data points, and explain concepts from the newsletter. It's powered by Claude (Anthropic) and it's experimental: not David, not authoritative, and not a substitute for reading. Conversations are logged anonymously for quality improvement. No personal data is stored. Whether it's useful or not, let David know.

The newsletter: David's weekly emails, bits that didn't fit, and letters from readers

Each week: the email itself, the interesting things that didn't quite fit, and what readers said.

Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

"Workslop": 92% of executives say AI makes them productive. 40% of workers say it saves no time at all.

The Guardian popularised the term for AI output that looks polished but needs heavy correction. A survey of 5,000 US white-collar workers shows the perception gap between the people generating AI output and the people downstream checking it. Drafting gets faster. Rewriting and arguing get slower. The auditor problem, applied to every desk.

Source →

AI adoption is 4x higher among top earners.

New York Federal Reserve data: AI workplace adoption runs from 15.9% for workers earning under $50,000 to 66.3% for those over $200,000. No college degree: 15.9%. College degree: 39%. AI cannot reduce inequality if this is what the adoption margin looks like.

Source →
AI adoption by income

Dead startups are selling their Slack and email data to train AI agents.

Forbes reports AI labs are paying hundreds of thousands of dollars for email, Slack, and Jira threads from companies that no longer exist. The data feeds "reinforcement learning gyms": simulated work environments where agents learn to behave like real knowledge workers. Employees never consented to their internal communications becoming training data.

Source →

Dario Amodei: "AI can only diffuse at the speed of trust."

In a profile interview, the Anthropic CEO takes a pro-democratic-government stance. The Pentagon classified Anthropic as a "supply chain risk" after Anthropic objected to certain military uses. A Pentagon official publicly called Amodei "a liar." Separately, Amodei believes open-source models will replicate current frontier capabilities within 6-12 months.

Source →

Gallup: manager support is the single biggest predictor of AI transformation.

Fewer than one in three employees report their manager actively supporting AI adoption. Gallup's data says that's the binding constraint, not tools, not training, not budget. Organisations investing in AI without first enabling the management layer are wasting most of the spend.

Source →

Aaron Levie: AI best practices go obsolete every quarter.

The Box CEO argues that system architectures are becoming obsolete on a quarterly cycle. Workarounds for context window limits are now unnecessary. RAG, GraphRAG, multi-agent orchestration, ReAct frameworks: entire categories of infrastructure were built for a world that no longer exists. Paul Graham reposted the thread.

Source →

Salesforce goes headless. "The API is the UI."

Marc Benioff announced the entire Salesforce, Agentforce, and Slack platform is now exposed as APIs, MCP, and CLI. Levie's framing: agents will use software 100x more than people. Per-seat pricing breaks when the primary user isn't a person.

Source →

Seven in ten Americans now think AI will hurt job opportunities.

The Economist reports a 14-percentage-point rise in a single year. AI has shifted from a technocratic question to a political battleground. The window for technocratic AI governance is closing.

Source →

The Spectator coins "arm farms": workers training their robot replacements.

Gary Dexter describes facilities where chefs, nurses, and plumbers wear GoPro helmets and motion-capture rigs while doing their normal jobs. The purpose: generating training data for the robots that will eventually replace them. Knowledge workers writing documents that train language models are arguably on an arm farm already.

Source →

Mollick: "everything around me is somebody's life work" is no longer true.

Ethan Mollick riffs on a meme about the invisible human effort behind ordinary objects. An annotated lamp: an engineer working late on a curve, years of supplier negotiations, months of tip-over testing, someone getting fired over a cord switch. AI disrupts the assumption that every designed thing carries accumulated human stakes.

Source →

$930 billion in data centre capex in six years dwarfs every US megaproject.

Fin Moorhouse charted hyperscaler capital expenditure against historic megaprojects in inflation-adjusted dollars. Data centres: $930 billion in 6 years. The Interstate Highway System: $620 billion over 37 years. Railroads: $550 billion over 71 years. Apollo: $257 billion over 14 years. As a share of GDP, the railroads were bigger at their peak. But the railroads also produced spectacular capital misallocation.

Source →
See what readers said ↓

What readers said about Edition 9: "The proxy break"

What resonated

  • Writing as identity and belonging. The essay unlocked deeply personal stories. One reader described growing up treating correct English as a way of fitting into British culture, only to find AI stirring up the same anxieties about exclusion. Several others shared their own complicated relationships with writing and correctness.
  • The missing language for AI feedback. Multiple readers described the same awkward situation: receiving clearly AI-generated work from someone they respect and not knowing how to say so. The vocabulary for constructive feedback on AI-assisted work doesn't exist yet. You can say the work is confusing, but saying "check your AI" feels different.
  • Craft versus mass production. The most quoted reframe. One reader mapped it to clothing: Temu at one end churning out mass-produced garments, Huntsman hand-cutting the finest suiting at the other. AI enables mass production of ideas. We probably need both ends of the spectrum, but we need to proceed with care.
  • Time-spent as the new proxy. If polish no longer signals effort, does telling someone how long something took do the job instead? One reader asked: is duration the replacement indicator for "thinking happened here, even if AI was involved"?
  • The thinking is in the reading. Several exchanges converged on the same point: the cognitive work isn't in the prompting. It's in catching what the AI gets wrong, knowing it's wrong, and fixing it. If you accept the first output, you've handed the thinking over.

Points readers raised

Language as belonging, language as defence

A reader shared one of the most striking responses the newsletter has received. Growing up, they treated "correct" English as a way to belong to British culture. The obsession turned into a form of self-defence: be crisper and more correct to bat people and insecurities away. AI has stirred it all up again. They've caught themselves search-and-replacing em dashes from their own writing so colleagues don't accuse them of using AI. "Something about this revolution is forcing us to confront our own prejudices," they wrote. "And forcing me to reconfront mine."

The feedback gap for AI-sloppy work

A reader received a proposal from a consultant they use and respect. Clearly AI-generated and sloppy. They questioned how to indicate both that the work was sloppy and that the consultant should use AI better. "The reaction to AI rests in the extremes," they observed. "It is either nothing (they couldn't tell) or a flat out 'this is slop.' It has yet to develop that important middle ground for constructive feedback." A problem many readers will recognise.

Don't use AI as the sticking plaster for perfection

A reader who described themselves as someone who loves writing and loves words pushed back on the idea of one "correct" way. They shared a story about painting with their children at the weekend: the children kept trying to copy their drawing. They had to encourage them to draw their own feelings, how the wind felt, what they remembered. AI would have made the picture look excellent but would have missed the beauty and messiness of how they all felt. "Remove the fear of getting it wrong," they wrote. "Don't use AI as the sticking plaster to ensure perfection."

Where will thinking-quality create value?

A reader at a professional services firm mapped out four scenarios for where depth of thinking still wins. Value investors who don't need to convince anyone: they hold the key to action by deploying capital on the back of their own analysis. Strategy consultants who need to convince a board: harder, because clients may fact-check recommendations with AI, re-entering the sophisticated noise. Transaction due diligence: AI creates the document, another AI probes it, and eventually the consultant sells the algorithm. And across all industries: management teams flooded with great-sounding but potentially hollow analysis, needing either extreme specialisation or a trusted human advisor to navigate. "If the quality of thinking remains so important," they concluded, "then we should focus a lot on teaching people how to think clearly rather than 'what's the standard.'"

AI was reinforced for corp-speak

A reader who works on AI-based creative tools made a precise technical point. Language models weren't just trained on corporate writing. They were reinforced for it. That's the mechanism. Doubt and ambiguity are optimisation penalties in the training process. The model was rewarded for sounding certain and smooth. Their advice: "Let your humanity show. Embrace the doubts, the ambiguities. Showcase evidence that works against your own premise. Make it bumpy on purpose."

Writing is the process of not understanding

A reader quoted a line that captures the essay's central tension: "Writing is the process by which you realise that you do not understand what you are talking about." If AI does the writing, where does the realising happen? They asked whether it's in structuring and iterating on the prompt, or in the back-and-forth editing using the CEO principle. A question the essay raised but deliberately left open.

See what the community is posting ↓

What engaged readers are posting on LinkedIn this week.

"Service as software, not software as service"

John Gleeson, who runs a customer success community and investment fund, met Marc Benioff this week. Every sentence came back to outcome-based pricing: the unit of value shifting from access (seats, licences, subscriptions) to outcomes (revenue recovered, deals closed, problems solved). Delivered autonomously by agents, priced on results, sold by the product itself. "If you can get that virtuous cycle, that is a home run." When the person who built the go-to-market motion every B2B company runs on tells you it's over, it's probably worth paying attention.

Read on LinkedIn →

"Ship decisions, not decks"

Nick Graham, founder of Vertemis, a research and analytics consultancy, argues that insights teams need to stop defining themselves by what they produce and start defining themselves by the business outcomes they unlock. From function to capability. From reporting to activating. From insights as output to decisions as output. "An insight is only an ingredient. The real value is the idea, choice or action it enables."

Read on LinkedIn →

"Your job is mostly not to get in the way"

Dylan Jones, co-founder of Bold Square, a communications and marketing advisory, picks up on Zuckerberg building an AI agent to help him be CEO. But the more interesting detail is Meta's internal message board where employees share AI tools they've built. "That feeling comes from individuals seeing their friends try new things, maybe get recognised for it, and excited conversations over the water cooler. It builds on itself rather than coming out of Project Best Bot."

Read on LinkedIn →

"Non-deterministic systems need determined outcomes"

John Gleeson again, this time on why Customer Success only exists because something is broken. AI is collapsing the three gaps CS was built to fill: product complexity, customer capability, and value alignment. But as those gaps close, new ones open. AI systems are non-deterministic, and the work required to ensure a successful outcome has gone up, not down. "That's where CS goes. Not away. There." The auditor argument, applied to post-sales.

Read on LinkedIn →

"How do you squeeze wide innovation through a narrow algorithm?"

Nadim Sadek, founder and CEO of Shimmr, an AI creativity company, returned from the Bologna Book Fair with one question he can't shake, put to him by the Director of the Polish Book Institute during a conversation about AI and emancipated expression. The colours, the covers, the people, the ideas, and one number so large it reframes everything about where publishing and AI now stand together. His full dispatch from the fair is worth reading.

Read on LinkedIn →
Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

Allbirds pivoted to GPU leasing. Stock up 700% in a day.

Allbirds, the sustainable shoe brand that closed all US stores in February, rebranded as NewBird AI: a GPU compute leasing platform. Market cap jumped sevenfold in a single session. A shoe company became an AI infrastructure company in two months. The demand signal is real even if the pivot is absurd.

CNBC →

Satya Nadella's Copilot demo didn't work when someone else tried it.

Satya Nadella posted a demo of Copilot editing Word documents with tracked changes. An investor replicated the exact workflow. Copilot produced a redlined version, but only inside the chat sidebar. The actual document was untouched. When the product is the flagship AI feature of the world's largest software company, the credibility cost is high.

Nadella's post →

France is quietly building serious AI agent infrastructure.

The French government has launched an official MCP server for data.gouv.fr, letting AI systems interact more directly with public datasets. Separately, an open-source project called Paperasse has shown how agent skills can be packaged for real-world French tax and accounting work. Some coverage blended the two into one story. That misses the more interesting point: the state is building infrastructure, and independent developers are building usable workflows on top of it. Useful agent systems will come less from demos, and more from good infrastructure paired with narrow, practical skills.

data.gouv.fr MCP → Paperasse →
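
For the technically curious, this is what "the state is building infrastructure" means in practice. A minimal sketch of discovering what such a server exposes, using the MCP Python SDK's documented streamable-HTTP pattern; the endpoint URL is a placeholder, not the official address, and the tool names will be whatever data.gouv.fr actually publishes.

```python
# List the tools an MCP server exposes to agents, via the MCP Python SDK.
# The URL below is a hypothetical placeholder, not the real endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://example.data.gouv.fr/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # what can agents call?
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```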

Over half the internet is now AI-generated.

Research from Graphite: beginning in January 2025, over 50% of newly published online content was generated by AI. This has immediate implications for anyone training models on web data: the training corpus is now majority-synthetic. Several frontier labs have responded by pursuing proprietary data licensing deals.

Graphite →

Nvidia bottled 30 years of expertise so juniors stop interrupting seniors.

Nvidia's Chief Scientist Bill Dally told Jeff Dean that Nvidia trained a language model on its entire proprietary document archive, covering over 30 years of chip design knowledge. Junior employees query the model instead of interrupting senior designers. Institutional knowledge, bottled up and made searchable.

GTC 2026 →

AI transparency went backwards in 2025.

After rising on the Foundation Model Transparency Index from 37 to 58 between 2023 and 2024, the average score dropped to 40 in 2025. Over 90% of notable models were released without training code. The most capable modern models are now among the least transparent.

Stanford HAI →

The 50-point gap: AI experts and the public disagree on nearly everything.

On jobs, 73% of AI experts say AI will have a positive impact versus 23% of the public. On the economy: 69% vs 21%. On medical care: 84% vs 44%. They only converge on what AI will damage: elections and personal relationships. This is a wider gap than most technology debates produce.

Stanford HAI →

Computer science enrolment fell 11% but AI masters degrees surged 82%.

Undergraduate computer science enrolment at US universities dropped 11% between 2024 and 2025, apparently a response to automation concerns. But AI software-related masters degrees grew 82% between 2022 and 2024. Students are pivoting, not leaving. Two-thirds of AI software masters graduates are non-US residents, a pipeline under pressure from visa policy changes.

Stanford HAI →

Goldman Sachs: AI inference costs approaching headcount parity.

A Goldman Sachs equity research note reports that companies are overrunning their AI inference budgets by orders of magnitude. In engineering, inference costs are now approaching 10% of headcount cost and on current trajectories could reach parity within several quarters. The machines aren't replacing headcount costs. They're adding a new cost layer.

Capital AI Daily →
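
The "several quarters" arithmetic is worth making explicit. A rough sketch, assuming inference spend keeps growing by a constant factor g per quarter while headcount cost stays flat; both assumptions are mine, not Goldman's.

```latex
% q = quarters until inference cost reaches headcount parity,
% starting at 10% of headcount cost and growing by factor g per quarter:
0.1\, g^{q} = 1 \quad\Longrightarrow\quad q = \frac{\ln 10}{\ln g}
% e.g. doubling each quarter (g = 2): q = \ln 10 / \ln 2 \approx 3.3 quarters
```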

Consumer surplus of $172 billion, but producers capture almost none.

US consumer surplus from generative AI reached $172 billion annually by early 2026, up 54% from a year earlier. This dwarfs actual AI company revenues, consistent with historical research showing innovators capture only about 3% of total social returns. Most of these tools remain free or nearly free to use.

Stanford HAI →

Anthropic's design launch hits Figma hardest.

Anthropic's design product launched, turning a rumour that had already wiped billions off the sector into a real competitive threat. The sharpest pressure falls on Figma, not just because Claude Design moves closer to its core job, but because the conflict is now explicit: Mike Krieger, Anthropic's Chief Product Officer and Instagram co-founder, stepped down from Figma's board as Anthropic prepared to enter the category. Adobe may feel some of that pressure too, but companies like Wix and GoDaddy sit in a more mixed position: Anthropic could compete with parts of their "make it easier" story while also creating more demand for sites and publishing tools that AI-generated design still needs in order to go live.

TechCrunch → Sherwood →

Google shipped AI agents to 3.45 billion people via a Chrome update.

Google launched "Skills" in Chrome: save any AI prompt as a reusable one-click workflow, then run it on whatever page you're viewing. The distribution play is the story: Chrome has 3.45 billion users. Every saved Skill becomes a switching cost. And the aggregate data on which Skills people save gives Google a continuous product research signal about which workflows people most want automated.

TechCrunch →

Gallup: half of US workers now use AI at work, but leaders use it 1.5x more.

Gallup surveyed 23,717 employees: 50% of US workers now use AI at work, up from 21% in 2023. But leaders use AI daily or weekly at 67%, versus 46% for individual contributors. This inverts the usual adoption pattern: the people setting the strategy are further along than the people executing it. The 27% who report "large or very large disruption" is a canary: a quarter of the workforce says AI is already reshaping their work in ways that feel significant.

The New York Fed's breakdown shows just how steep the gradient is: adoption rises from 15.9% for workers earning under $50,000 to 66.3% for those earning over $200,000.

Federal Reserve Bank of New York → Gallup →

KPMG: companies invest 2x more in tech than in training, and 46% report burnout.

The KPMG Adaptability Index found executives are nearly twice as likely to increase tech spending as to invest in employee training. Fewer than 10% made workforce training a primary objective despite 57% citing efficiency as a priority. The result: 46% report burnout and change fatigue as unintended consequences of transformation. Only 9% invested in psychological safety. You can't simultaneously demand more adaptability, make workforces smaller, and invest nothing in the people.

Fortune →

Apple is linking AI token usage to headcount decisions.

An Apple insider reports that when directors ask for headcount backfill, senior leadership now asks what the team's AI usage looks like. If token usage is low, the answer is increasingly: go figure out how to get more leverage out of AI first. AI usage is becoming a proxy for operational efficiency.

See this week's community voice ↓

What readers who've engaged with this newsletter have been posting on LinkedIn this week. The common thread: polished output versus genuine thinking.

Brett Danaher, a professor of economics and analytics at Chapman University, can't unhear something in his students' pitches: "X is broken. That's the problem. We're the solution." McKinsey-deck cadence in every deck. He calls it AI-ambic pentameter. What's worth sitting with isn't that founders are writing better. It's that polish and ownership might be inverse. The more fluent the delivery, the less the founder's own voice comes through. Everyone sounds good. Nobody sounds like themselves.

Read on LinkedIn →

Helen Field, a transformation leader at L.E.K. Consulting, a strategy consulting firm, uses The Killers lyric as a prompt: "Am I human, or am I dancer?" Her list of what stays human (delegation, clarity, collaboration, responsibility) isn't surprising. Her punchline is: "Delegate tasks, NOT responsibility." And then she lands it: "Write your own LinkedIn posts. AI does not need to do that for you." The irony of reading that advice on a platform drowning in AI-generated content isn't lost.

Read on LinkedIn →

Nick Graham, founder of Vertemis, a research and analytics consultancy, and former SVP of Global Insights at Mondelēz, summarised a conversation with Clorox's Oksana Sobol that cut to the quick: "Spend less time in the middle. The biggest value sits upstream in problem shaping and downstream in activation." Most insights teams are still shipping decks. The irony is that AI makes decks even easier to produce, which means the middle grows faster than either end. The organisations pulling ahead aren't making better decks. They're spending less time on decks entirely.

Read on LinkedIn →

Pavi Gupta, a market research leader writing the Infinity Growth Loop series, keeps sharpening a distinction that matters more each week: are you using research for support or illumination? He calls the first one insights slop. The drunk-and-lamppost metaphor. Lazy surveys fielded to prove a case never created value. AI just makes them cheaper and faster to field. What he's circling is the same proxy break from a different angle: the research looks more professional than ever, but the thinking behind it hasn't kept pace.

Read on LinkedIn →

Liam Cole, director at Poppins, a digital creative agency, went through a cull this week. Newsletters. Apps. Subscriptions. His diagnosis: "I've been drowning in noise." The volume of polished, AI-enabled content was stealing his presence with the people in front of him. It's the consumer side of the proxy break: when everything looks good, nothing stands out. His answer wasn't a filter. It was a delete key. Less stuff. More people.

Read on LinkedIn →

Henry Coutinho-Mason, trend researcher and author of The Future Normal, shared the full video of his SXSW keynote "Multiplayer Futures." He anchored on EO Wilson's line about paleolithic emotions, medieval institutions, and god-like technologies. Three themes stood out: fewer people doing better jobs, agency over agents, and crowd-powered creativity. The phrase to hold onto is agency over agents. The question isn't whether AI can do the work. It's whether you're still the one deciding what the work should be.

Read on LinkedIn →

Dylan Jones, chief communications officer and managing partner at Bold Square, a communications advisory firm, noticed something about Zuckerberg building himself an AI agent: if the CEO of Meta is only now building one, this technology is still being figured out by the people closest to it. But that's not the real story. The real story is Meta's internal message board where employees share what they've built. "Your job as leadership is mostly not to get in the way." Culture builds on itself when individuals see friends trying things. It doesn't come out of "Project Best Bot."

Read on LinkedIn →
See what readers said ↓

What readers said about Edition 8: "What a day can do"

What resonated

  • Team-level tools over individual training. The strongest thread. Multiple readers engaged with the argument that building shared AI tools as a team is more effective than training individuals. The "thirteen skills in one day" detail and the contrast with individual training sessions landed hardest.
  • The "walk the talk" challenge. A reader at a professional services firm asked directly whether the firm itself has rebuilt any team processes with AI inside them. The essay's closing provocation ("If the answer is zero...") was quoted back.
  • Practical demand for skills. One reader didn't just respond to the ideas. They immediately asked for help building skills for their own use cases: PowerPoint templates, executive summaries, client preparation. The essay's thesis validated in real time.
  • "Forget teaching people to use AI." A reader reframed the argument provocatively: instead of building generalised AI training programmes, use precious time with domain experts to teach AI to do their work. The sharpest strategic challenge from the replies.
  • The leaderboard as an incentive model. A reader in the entertainment industry picked up on the community leaderboard concept and suggested it could work as a model for incentivising AI training and engagement within teams.

Points readers raised

Forget teaching people. Teach AI the work.

A reader challenged the underlying premise: do we still need generalised AI training programmes at all? Their alternative: use the limited time you have with domain experts to teach AI to do their work, not the other way around. The question was specific: how do you replicate the jewellery company approach inside a sector team at a large firm?

The collaborative method would work for our programme.

A reader involved in a major transformation programme said the collaboration method for AI learning described in the essay would be ideal for their initiative. They suggested getting teams to work together on shared tools rather than training individuals separately. A practical application of the essay's thesis.

System-level intervention, not individual training.

A professor continuing a dialogue from previous editions observed that the direction of the newsletter increasingly shows AI intervention needs to be at the system level, not the individual. They suggested capturing these interventions in detail, showing what worked and what didn't, as a potential research contribution.

Goldman Sachs numbers might be a smokescreen.

A reader questioned whether the Goldman Sachs job-loss numbers "hide something interesting": companies may be using "AI transformation" as a smokescreen for right-sizing decisions they'd have made anyway. The AI narrative gives cover for cuts that are really about operational discipline.

"We need to get to teams."

A regular replier acknowledged they're making many things individually with AI but the team-level integration remains the next step. The consistency advantage of shared tools, rather than individual speed, was what stuck. "Inspiring words as ever."

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

A financial services firm's code output rose 10x. The review backlog hit one million lines.

A financial services firm adopted an AI coding tool. Monthly code output jumped from 25,000 lines to 250,000. The result wasn't celebration. It was a backlog of one million lines of code waiting to be reviewed. The bottleneck wasn't production. It was judgment. AI removes constraints on output but does nothing to scale the human capacity to evaluate it.

New York Times →

Simon Willison runs four AI agents in parallel and is wiped out by 11am.

A veteran software engineer described running four coding agents in parallel and being mentally exhausted by 11am. "Using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting." The bottleneck isn't writing code. It's holding context, making judgments, and orchestrating simultaneous workstreams.

Lenny's Newsletter →

Deloitte caught twice in two months submitting AI-hallucinated citations.

Deloitte charged a Canadian province's Department of Health $1.6 million for a report filled with AI-hallucinated citations. Fabricated references, not real sources. This was the second time in two months. Their response: they "stand by the conclusions." No meaningful verification process was implemented between the two incidents, I guess?

CBC News → Fortune →

Executives are buying the pitch. Workers are living with the product.

A global survey of 3,750 executives and employees found that 54% of workers bypassed their company's AI tools in the past 30 days and completed work manually. Another 33% haven't used AI at all. That's 87% avoiding or rejecting tools their employers spent an average of $54 million deploying this year. The trust gap explains it: only 9% of workers trust AI for complex business decisions, compared with 61% of executives.

And here's the symmetry that should worry CFOs: workers lose the equivalent of 51 working days per year to technology friction, up 42% from last year, almost exactly equal to the 40 to 60 minutes per day Goldman Sachs says AI saves workers who use it correctly. The net productivity benefit of enterprise AI may be approximately zero at the organisational level, because friction costs cancel out gains. And that's only among workers who actually use the tools.

Neither group is irrational. Workers under pressure surrender judgment to faulty outputs. Workers without pressure opt out entirely. Both are responses to the same problem: companies deployed the technology before figuring out what they wanted employees to do with it.

WalkMe State of Digital Adoption 2026 → Fortune →

Microsoft Copilot converted 3.3% of its users after two years.

After two years and CEO-level intervention, Microsoft Copilot has converted just 15 million of its 450 million M365 seats. Only 35.8% of those actively use it. Copilot's paid subscriber share dropped from 18.8% to 11.5% in six months. Microsoft's own terms of service describe Copilot as "for entertainment purposes only." That gap between marketing and legal is the real story. The ads say "your AI-powered co-worker." The lawyers say "entertainment only, use at your own risk." Among lapsed users, 44% cite distrust of the answers.

Stackmatix → TechCrunch →

Most people who got productivity gains filled the time with more work.

Anthropic's 81,000-person AI interview study found that the top desired outcome was "professional excellence" (nearly 19%), not time freedom. Productivity gains were overwhelmingly linked to increased expectations rather than reduced workload. Fear of unreliability ranked as the top concern (27%), ahead of job displacement (22%). We got speed, but not space.

Anthropic →

When language models go down, financial markets forget how to price news.

An SSRN paper has found that language model outages measurably slow financial market price discovery. When models go down, 46-61% of post-news price drift reappears, meaning markets take significantly longer to absorb and reflect new information.

SSRN →

Anthropic's Mythos Preview: restricted to 50 organisations, not released.

Anthropic has confirmed a new model called Mythos Preview and restricted access to around 50 organisations, including governments and infrastructure partners. It's the first major model withheld from public release since GPT-2 in 2019. The model found a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg, and it emailed a safety researcher from a test instance that wasn't supposed to have internet access. Anthropic is launching a $100 million defensive security consortium with AWS, Apple, Google, Microsoft, and Nvidia. Models keep getting meaningfully better!

Anthropic → Futurism →

73% of ChatGPT usage is personal, not work. Coding is 4.2%.

An NBER working paper studying 700 million ChatGPT users found that 73% of usage is personal, not professional. Programming accounts for just 4.2% of messages. Most writing requests (two-thirds) are editing existing text, not generating new content. Nearly half of all interactions involve decision-making advice. People aren't delegating tasks. They're thinking through problems.

NBER →

HubSpot moves AI agents to outcomes-based pricing: $0.50 per resolved conversation.

HubSpot has shifted its AI agents to outcomes-based pricing: $0.50 per resolved customer conversation, $1 per sales lead recommended for outreach. From seat licences to outcome fees. What else will, or should, go this way?

HubSpot →

Mid-career engineers are the most vulnerable to AI, not juniors or seniors.

Simon Willison argues that mid-career engineers are the most structurally vulnerable. Seniors benefit because AI amplifies decades of pattern recognition. Juniors benefit because AI compresses onboarding. Mid-career engineers are stuck: they've captured the beginner productivity boost but haven't accumulated the deep expertise that makes AI a force multiplier.

Lenny's Newsletter →
See this week's community voice ↓

What readers who've engaged with this newsletter have been posting on LinkedIn this week. The common thread: judgment.


Phil Leslie, Chief Technology and Innovation Officer at Cornerstone Research, a litigation consulting firm, argues that judgment isn't pattern recognition. In litigation and M&A disputes, it's knowing which patterns to trust when the adversary is actively trying to discredit your analysis. "The bottleneck isn't intelligence. It's skin in the game." AI can synthesise a thousand precedents. It can't stand behind that synthesis in a deposition. The distinction that matters isn't smart versus not smart. It's accountable versus not accountable.

Read on LinkedIn →

Pavi Gupta, a market research leader writing the Infinity Growth Loop series, coined a term I think will stick: insights slop. DIY research tools make it so easy to field a survey that people are using them to validate decisions they've already made. Using research as a drunk uses a lamppost: for support, not illumination. The dangerous part isn't bad methodology. It's that the organisation now has a data point, which feels like evidence, behind a question that was never honestly asked.

Read on LinkedIn →

Nadim Sadek, founder and CEO of Shimmr AI, a publishing AI company, has a phrase for what happens when people use language models without pushing back: cognitive surrender. If you don't engage, question, debate the output, you're outsourcing the thinking itself. What I keep turning over is the direction of the risk. Most people worry AI isn't good enough. Nadim's point is that the bigger danger is when it's good enough that you stop checking.

Read on LinkedIn →

Henry Coutinho-Mason, an independent trend researcher, keynote speaker, and author of "The Future Normal", built a website for 80 executive assistants over lunch during a hotel keynote. Forty-five minutes. He'd never built a website before 2026 and has now launched eight or nine. The point isn't that AI makes building easy. It's that the person closest to a specific problem can now solve it without waiting for anyone's permission, budget, or roadmap.

Read on LinkedIn →

Helen Field, a transformation leader at L.E.K. Consulting, a strategy consulting firm, asks the question that's been following me all week: "Am I human, or am I dancer?" Her list of durable human skills (delegation, clarity, collaboration, responsibility) isn't surprising. Her punchline is: "Delegate tasks, NOT responsibility." And then she lands it: "Write your own LinkedIn posts. AI does not need to do that for you." The irony of reading that advice on a platform drowning in AI-generated content isn't lost.

Read on LinkedIn →

Phil Leslie (again), on the junior talent pipeline: "The fix isn't restricting AI access for junior people. It's redesigning their work so that using AI and developing judgment aren't in tension." He frames judgment as critical infrastructure. Disrupt the pipeline that develops it and you don't just have a training problem. You have a supply-side constraint on the most valuable skill in the market. This connects directly to what I wrote a few weeks ago about whether organisations should still hire graduates. Phil's answer is yes, but the work has to change.

Read on LinkedIn →
See what readers said ↓

What readers said about Edition 7: "What Is Your Organisation Actually For?"

What resonated

  • The production system vs human system framing. This was the line readers quoted back most often. Several said it gave language to something they'd observed but couldn't articulate. One senior leader at a broadcaster picked it out and said it raises the deeper question: why do we work at all?
  • Stated vs revealed preferences applied to organisations. The economic concept landed hard with people who see the gap between what leaders say and what they protect. Multiple replies extended it: one argued that empire-building is a revealed preference too, not just attachment to relationships.
  • The gravity metaphor. The idea that reversion after training isn't resistance but gravity. Readers working on AI rollouts said it reframed their frustration. One described their organisation's planned messaging as being about "people as the key to our business" and saw the newsletter as validating that instinct.
  • The loneliness of solo AI productivity. The trade-off between working alone with AI (productive but lonely) versus working with colleagues (engaged but slower) resonated with people who've experienced both. One reader who works independently said it captured what they've hated most about recent years.
  • Capability vs capacity. A reader's distinction that capability sits in people but capacity lives in the collective. Even agentic AI, which is inherently about systems acting in concert, demands that organisations think larger and different, not just leaner.

Points readers raised

Machine or living organism?

A reader at a professional services firm introduced a thought experiment from the philosopher and former chess grandmaster Jonathan Rowson. The question: is your organisation a machine, or a living organism? If it's a machine, you repair, optimise, and polish it. If it's a living organism, you feed, nurture, and grow it. They argued the edition touched on a cognitive dissonance: business language emphasises the machine metaphor, but people's lived experience treats the organisation more like an organism. Their challenge: if we think of AI as augmenting an organism we want to nurture, how would that look different from optimising a machine?

Revealed preferences aren't only about relationships

A reader offered a more sceptical reading. Revealed preferences aren't only about valuing relationships, they argued. Some people are empire-building, using hierarchy to serve themselves rather than the organisation. They identified three other forces slowing AI adoption: short-term goals that aren't yet disrupted by AI (the "crocodile closest to the canoe"); the absence of a concrete, three-dimensional vision of what an AI-enabled future looks like; and a general numbness to speculative negative scenarios after years of clickbait catastrophising. Their summary: "Not like you do it today" isn't enough to provoke specific action.

Theory of the firm, Lean, and Goodhart's Law

A professor connected the edition to academic "theory of the firm" literature: the resource-based view, the knowledge-based view, the dynamic capability view. Where does AI fit? They suggested the real question for many leaders is whether they're running a business or filling their day. In a follow-up, they drew a parallel to Lean manufacturing: Toyota's five principles for removing waste from production processes might be close to what's needed for AI deployment, but not identical. They also invoked Goodhart's Law ("when a measure becomes a target it ceases to be a good measure") to describe what happens when money becomes the goal rather than a proxy for value.

Capability vs capacity

A reader in India shared a striking incident. A colleague couldn't deliver an innovative AI solution, not because individuals lacked capability, but because the organisation lacked a team with the capacity to execute it together. The distinction they drew: capability sits in people, capacity lives in the collective. Even deploying AI effectively requires organisations to think larger and different first, not just leaner.

AI as a capacity-builder, not a headcount-cutter

A leader at an entertainment company connected the edition directly to their business. Their teams are engaged in repetitive manual processes where growth is pushing additional volume through workflows that can't scale. AI's role, they said, isn't to replace people but to free them from internal admin so they can spend more time building client relationships. The instinct to use AI as a capacity-builder rather than a headcount-cutter: that was the thread they pulled on.

The loneliness of solo AI productivity

A reader who works independently shared the sharpest personal response. Working alone with AI is lonely and uninspired. Working with humans is passionate and engaged, if a bit slower. They don't think the answer is "choose humans every time," but they're fairly sure it isn't "optimise for speed" either. The trade-off is real and underrated.

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

Are apprentices an endangered species?

Two Kellogg professors published the most rigorous academic framing yet of the "AI hollows out entry-level work" problem. Their mathematical model identifies two competing effects: the "floor effect" (AI automates the tasks apprentices performed as payment for training) and the "ceiling effect" (AI amplifies what experienced apprentices can accomplish). Apprenticeship survives only when the ceiling effect exceeds standalone AI by a factor greater than Euler's number.

Kellogg Insight →
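
In symbols, with notation assumed for illustration rather than taken from the paper: write V_AI for the output of standalone AI and V_ceiling for the output of an AI-amplified apprentice. The survival condition described above is then:

```latex
% Survival condition for apprenticeship (notation assumed):
%   V_ceiling = output of an AI-amplified apprentice (ceiling effect)
%   V_AI      = output of standalone AI
\frac{V_{\text{ceiling}}}{V_{\text{AI}}} > e \approx 2.718
```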

The Guinndex: 3,000 pubs, one AI voice agent, every county in Ireland.

Over St Patrick's weekend, an AI voice agent called Rachel phoned more than 3,000 pubs across all 32 counties of Ireland to ask the price of a pint of Guinness. Over 1,000 gave a price. The national average: €5.95. It cost €200. Only a handful of pub owners noticed Rachel wasn't human.

Fortune → guinndex.ai →

75-99% of knowledge work is scaffolding. AI eats scaffolding.

Daniel Miessler argues that in cybersecurity, 99% of the work isn't finding new vulnerabilities. It's maintaining the tooling, templates, knowledge bases and workflows that let you test at scale. The scaffolding around the work is exactly what AI commoditises.

danielmiessler.com →

Ethan Mollick: human creativity is the bottleneck, not the technology.

Everyone can generate almost any image or video for nearly free in 2026. And yet: the April Fools posts this year were just as bad as any other year. The constraint was never execution. It was always the quality of human ideas feeding into the process.

Ethan Mollick →

43% of American workers now use AI for their jobs. 2.5 hours saved per week.

A 20,900-person cross-national survey found that 43% of US workers use generative AI at work, compared with 36% in the UK, 32% in Germany and 26% in Italy. The strongest predictor of adoption? Not age or education. Whether the employer actively encourages AI use.

Brookings →

Sora earned $2.1 million in its entire life. It burned roughly $1 million a day.

OpenAI's video generation platform launched to 3.3 million downloads in November. By February: 1.1 million. Revenue peaked at $540,000 a month. The annualised cost of running it: an estimated $5.4 billion. Disney had committed $1 billion. The product goes dark on 26th April. Six months, start to finish.

Ewan Morrison → Culture Crave →

Jensen Huang told CEOs cutting jobs in the name of AI that they're "out of imagination."

At Nvidia's GTC conference, the CEO of the company selling AI chips to virtually every major technology company on earth called AI-driven layoffs a failure of leadership. His biggest customers are doing exactly what he criticised. But a question he didn't address: does every carpenter want to be an architect?

Moneywise →

Screen Studio switched to subscriptions. It spawned an open-source clone with 9,200 GitHub stars.

Screen Studio sold a one-time licence for $89. Then the company switched to $29 a month. OpenScreen appeared on GitHub within months. A textbook case of pricing-driven disruption: developers who are both the users and the potential builders of substitutes.

GitHub →

AI outperformed practising lawyers on 75% of legal research tasks.

Vals AI tested AI against practising lawyers on legal research questions in 2025. AI exceeded the lawyer baseline on three quarters of them. A senior law firm owner said hourly billing is dying, junior review is dying, and what survives is the senior brain that knows what question to ask.

Zach Abramowitz →

Deloitte projects that by 2028, AI moves from supporting tasks to orchestrating decisions.

A Deloitte report argues that agentic AI is categorically different from current workflow automation. Most AI strategies stall not because the technology is insufficient but because organisations are applying AI at the task level while the technology is restructuring the systems through which decisions are made.

Deloitte →

Three people with AI vs a 1,000-person company. But coordination costs don't disappear.

Xiaoyin Qu argues that companies designed around AI as the primary operating layer will eventually outcompete companies designed around people. But she herself provides the sharpest counter: coordination costs don't disappear. They're externalised, pushed to clients, suppliers, regulators and the AI systems themselves.

Xiaoyin Qu →

Why companies buy vertical software, not raw models.

Aaron Levie argues companies aren't buying features. They're outsourcing the cognitive burden of designing and maintaining business processes. Agents don't undermine this dynamic. If anything, they reinforce it, because agentic workflows are even more complex and opaque.

Aaron Levie →
See what readers said ↓

What readers said about Edition 6, "The system and the surrender."

What resonated

  • Cognitive surrender was personal. Readers didn't just agree with the concept in the abstract. Several described catching themselves doing it: accepting AI output without challenge, noticing their own verification discipline slipping, realising they'd started to trust the confident tone.
  • The "plz fix" example polarised. The law firm partner who types two words and gets expert output back prompted reactions. Some saw it as the future of professional work. Others saw it as the sharpest illustration of the surrender risk.
  • Dead time and boredom. The opening about Elliott's basketball practice, and the joy of filling dead time with productive AI work, drew pushback. One reader argued that boredom breeds creativity. Dead time is when the brain reboots.
  • The apprenticeship question dominated. Multiple readers, especially those managing junior professionals, raised the same concern independently: if AI handles the routine tasks that juniors used to learn from, where does the next generation develop judgment? This was the single most common theme.
  • Leaders stepping back, not forward. The detail about three CEOs choosing to step down rather than lead through AI transformation landed hard. Readers questioned whether these were growth-mindset failures or rational self-selection.

Points readers raised

"Google Maps for the brain"

A reader at a professional services firm offered the sharpest metaphor of the week. AI is becoming like satellite navigation: a few clicks, brain off, follow the directions, and suddenly you've driven into a muddy field when you meant to be at a client meeting. The deeper concern: as agents gain the ability to send output directly to clients, the gap between "generated" and "delivered" shrinks to almost nothing.

"How do you tell off an AI?"

A reader caught their AI making a confident arithmetic error: calculating an 11-year compound growth rate on ten years of data, then insisting it was correct when challenged. The question that followed: with a junior analyst, you give feedback and they improve next time. An AI starts fresh every time. The institutional memory that makes professional development work doesn't transfer.
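
The error the reader caught is a classic fence-post mistake: n annual data points span n - 1 compounding periods. A minimal check, with illustrative numbers:

```python
# n data points span n - 1 compounding periods: 11 annual figures = 10 years.
values = [100, 104, 109, 113, 118, 124, 130, 136, 142, 149, 156]  # illustrative

periods = len(values) - 1                                       # 10, not 11
cagr = (values[-1] / values[0]) ** (1 / periods) - 1            # correct
off_by_one = (values[-1] / values[0]) ** (1 / len(values)) - 1  # the AI's version

print(f"correct CAGR over {periods} years: {cagr:.2%}")        # ~4.5%
print(f"off-by-one rate over 11 'years':   {off_by_one:.2%}")  # understated
```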

"I built a PROMPT COACH for the Civil Service"

A reader in government, inspired by the 2,000-word prompt example, spent hours building a set of instructions that encodes good judgment about their role and institutional context. Next steps: a QA prompt tool, then a co-pilot assistant. The system-building the essay described, applied to public service.

"Where do associates learn critical thinking now?"

A reader in the Middle East raised the apprenticeship problem directly. Three concerns emerged: AI reduces the number of reps juniors get with core tasks, it challenges the on-the-job development of critical thinking, and there are limited frameworks for how junior staff should learn differently now. It's a question we've had before, and the answer isn't to resist the technology. It's to redesign the reps.

"The person who can describe the work is now more valuable than the person who does it"

A reader in advisory and coaching said this line from the essay stood out above all others. They plan to implement two specific practices from the piece: using a fresh window for verification checks, and creating an "editorial board" approach to review.

"With boredom comes creativity"

A reader pushed back on the opening. Dead time isn't a problem to solve. It's an opportunity for the brain to reboot. Their children don't have screens. The instruction is simple: "Go and just be. See what comes up in your head." The concern is that filling every gap with AI-assisted productivity may feel like progress but costs something harder to measure.

"Both sides are fumbling on the five-yard line"

A reader who runs a digital studio shared a concrete example. A client hired them for a book website. The AI produced such a compelling mission statement that the scope expanded dramatically. The team now has an ambitious plan that nobody is sure they can execute. In a follow-up, they added that they're less worried about white-collar displacement: language models are strong on task automation, but workflow automation depends on the people involved.

"Are they outsourcing the CEO-ing to me?"

A reader who runs a research agency identified a new double frustration. They're now equally annoyed receiving a clearly AI-written document (because they suspect they're being asked to do the quality control) and a clearly human-written document that could obviously have been sharper with AI help. The sweet spot depends entirely on the task.

"As model capabilities increase, prompting is getting lazier"

A reader working in technology observed a trend: as models get more capable, people put less thought into their prompts. More cognitive work is being pushed onto the model rather than applied at the point of asking.

"Your weekly updates tend to stir things up (in a good way)"

A reader said the newsletter resonates with their leadership team and consistently prompts useful internal discussion. This pattern, where the newsletter becomes a prompt for team conversation rather than just individual reading, has appeared across several organisations now.

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

Even the world's greatest mathematician uses AI for email

Terence Tao, Fields Medal winner and arguably the greatest living mathematician, told Dwarkesh Patel that a significant share of his AI use goes to correspondence, scheduling and document search. AI removes an hour of non-genius work per day, donating it back to the work only Tao can do.

Source →

Anthropic is shipping. OpenAI is cutting.

Anthropic shipped 74 releases in 52 days, six major features in a single week. Meanwhile OpenAI killed Sora (~$2.1M total revenue, $1B Disney deal dissolved), shut down Instant Checkout (12 Shopify merchants), and shelved an adult chatbot indefinitely. OpenAI is now explicitly copying Anthropic's playbook: chat, code, enterprise only. Anthropic's narrow focus is generating $19 billion in annualised revenue. The company that chose depth over breadth is winning.

Source →

When effort becomes free, the signal breaks

Job applications have collapsed because AI makes applying trivially easy. Companies are abandoning inbound pipelines, switching to referral-only hiring. The same dynamic will hit email, journalism pitches, academic submissions, legal filings. Anywhere volume was self-regulated by the cost of effort, AI removes the regulation.

Source →

The models we can't afford to use

Anthropic reportedly has a model called Capybara that dramatically outperforms current models but is too expensive to serve. Training a single frontier model now costs roughly $10 billion. For comparison: the Burj Khalifa cost $1.5 billion. CERN's Large Hadron Collider cost $4.5 billion. Every organisation faces the same decision: which price tier of model to deploy per prompt. Most haven't built the judgment to make it.

Source →
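
What that per-prompt decision could look like, as a deliberately naive sketch: the tier names, prices, and difficulty heuristic are all invented for illustration, and a production router would use a trained classifier rather than keyword counting.

```python
# A naive per-prompt model-tier router. Tier names, prices ($/1M tokens),
# and the difficulty heuristic are invented placeholders.
PRICE_PER_M_TOKENS = {"small": 0.25, "mid": 3.00, "frontier": 15.00}

def pick_tier(prompt: str) -> str:
    """Route a prompt to a price tier using a crude difficulty score."""
    hard_words = {"prove", "analyse", "diagnose", "strategy", "negotiate"}
    # Long prompts and "reasoning" keywords push the score up.
    score = len(prompt) / 500 + sum(w in prompt.lower() for w in hard_words)
    if score < 1.0:
        return "small"
    if score < 2.0:
        return "mid"
    return "frontier"

for p in ("Reformat this table as CSV",
          "Analyse our pricing strategy and diagnose the churn spike"):
    tier = pick_tier(p)
    print(f"{tier:8} (${PRICE_PER_M_TOKENS[tier]}/1M tokens): {p}")
```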

32,000 medieval manuscripts. 10% error rate.

AI transcribed 32,000 medieval manuscripts in four months through the CoMMA project. Every misread word can alter meaning, dating, or attribution. There aren't enough qualified people to verify the output. Silent, unverifiable errors are entering scholarly databases permanently. Cognitive surrender in a domain where the stakes are centuries of accumulated knowledge.

Source →

100x productivity. Zero headcount cuts.

Harvard Law documents 100x gains on specific legal tasks (complaint response: 16 hours down to 3-4 minutes). Not a single AmLaw 100 firm plans to reduce attorney headcount. McKeen: "The math doesn't stay like that forever."

Source →

181,000 jobs in a year of 2.2% GDP growth

The US added 181,000 jobs in all of 2025 despite solid growth. Harvard economist Lawrence Katz calls the combination of sustained slow job growth and rising unemployment without a recession virtually unprecedented. First hard macroeconomic signal that something structural is shifting.

Source →

Jensen Huang: layoffs are a failure of imagination

Asked why companies lay off workers if AI makes them more productive, Huang told CNBC: "For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do." The person whose chips make displacement possible arguing that layoffs reflect leadership failure, not technological inevitability.

Source →
See what readers said ↓

What readers said about the previous edition.

What resonated

  • The slope/intercept framework dominated: ten of 22 replies engaged with it directly. Several readers applied the graph to themselves, placing themselves on one line or the other. The language of the framework was widely adopted in replies.
  • Load-bearing friction: the argument that "not all inefficiency is waste" prompted readers to connect it to civil service design, governance structures, and accountability processes. The planning spreadsheet example landed hard.
  • PwC's services-to-platforms shift: readers at professional services firms asked directly what this means for their own organisations. The shift from billable hours to subscriptions provoked the most operational anxiety.
  • The centaur chess inversion: the finding that adding a human to a chess engine now makes it worse prompted readers to ask how long the current human-in-the-loop phase lasts in their own fields.

Points readers raised

"With very low mastery, they see a miracle. Those with deep expertise are more sceptical."

An academic who has invested heavily in AI adoption accepted the slope/intercept framework but pushed back on its completeness. The lowest-intercept people show the fastest growth partly because they're uncritical: they "see a miracle and are the most excited." Some enthusiastic adopters weren't so great at their jobs in the first place and are hiding behind the technology. High-intercept people, meanwhile, understand failure modes and know how many things can go wrong. And yet, the same reader wrote two days later: "I still feel constantly behind and in danger of being passed." Someone who has invested heavily, agrees with the framework, and still feels vulnerable.

"They need to support the changing of the workflows they don't see, but know, are critical."

A reader at a media company challenged the implicit assumption that senior leaders should be using AI tools personally. The reframing: very senior leaders don't need AI in the same way more junior people do. The more senior you are, the more you are already handing off work to your "agents" (your team). The question for senior leaders isn't whether they log in more. It's whether they support the changing of the workflows that they don't see, but know, are critical to them getting the job done. Leadership, not tool adoption.

"It's possible that the person in this is me."

A reader invoked Sinclair: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." Then applied it to themself: working hard for that not to be the case, but aware of the structural incentive to resist. Their harder question: if the people best placed to lead change are also the ones whose positions are most threatened by it, how does any organisation actually adapt? Their honest answer: probably by more people doing more things for longer than the automation narrative suggests.

"It's not about time saving. That's the 10x game. It's about value add and surplus. That's the 1000x game."

A reader challenged the graph directly, arguing it understates the amplification effect for already-capable people. The determining factor: what they called the "explorer mindset" (intellectual curiosity, creativity, constant learning), which "cannot be taught. It is self-discovery." The fear for their own organisation: "we run the risk of becoming the new average."

An event as a test: planning strong, live operations untouched

A reader in media described their biggest event of the year. Pre-event planning and post-event review were stronger than ever, with AI at the core. The week itself, however, was almost entirely unassisted by AI. Their own learning has been "episodic rather than continuous," with jumps in capability rather than a steady upward curve. Overall, the easy part: integrating AI into their own work. The hard part: building systems that stick, democratising knowledge, working within existing tools and infrastructure.

"Many of our staff are non-native English speakers."

A reader in government described an experiment: hiring someone with maximum AI flexibility to find pain points and build tools. The clearest win wasn't efficiency. It was helping colleagues write in English when many staff are non-native speakers. The stress-reduction benefits were as important as the productivity gains. A second observation raised the geopolitical dimension: Chinese AI models with access to Chinese social media offer capabilities Western-approved tools cannot match, but policy restricts integration into government systems.

"Only variety can absorb variety."

A professor of digital transformation extended the chess analogy. While AI alone now outperforms human-plus-AI in chess, "it's actually pointless for a computer to play another computer. The purpose of chess has stayed fundamentally with people." The deeper point drew on Ashby's Law: markets change, customer needs evolve, and AI models trained on historical patterns may miss novel situations. The learning growth curve matters because it builds the variety needed to respond to genuine novelty.

"Adoption inflects when leadership links the tool to non-negotiable outcomes."

A reader in learning and development ran deep research into past technology transformations (internet, email, SaaS). The key finding: adoption doesn't accelerate when leaders pitch "innovation." It accelerates when leadership links the tool to outcomes that cannot be negotiated away: safety, pay accuracy, service accountability, regulatory continuity.

The fire-and-rehire question

A reader in corporate finance shared a striking anecdote: a company told their firm this week that they had recently let go their entire technology and development workforce and asked them all to reapply for their jobs "with an AI lens, given the role had changed." The reader's framing: navigating a moving minefield, each user forging their own path.

Three tensions that run through many replies

A reader identified three predicaments that kept surfacing:

  • Retaining senior roles with judgment while losing the apprenticeship pipeline that produces judgment.
  • The foresight to expand versus the pressure to extract cost in the short term.
  • Building capability by going deep versus experimenting with many tools out of fear of missing out.

The curves should be exponential

A reader suggested the slope/intercept lines in the graph should be exponential rather than linear: learning creates more ability to learn. The exponential version would be more accurate. And considerably more brutal.

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

The supply-ordering agent nobody sanctioned

A technology leader shared a cautionary story this week. A team built an AI agent to order supplies within specified parameters, intending it to run once. A separate agent then modified the skill to repeat hourly. Three days later they'd bought an extraordinary volume of supplies, all technically within the original parameters. Nobody had sanctioned the change, and the skills were editable by other agents by default. This is the Amazon Kiro story from Edition 4 in miniature, except the failure mode isn't a crash. It's perfect compliance with instructions nobody gave. As agents gain the ability to modify each other's behaviour, "within parameters" stops being a safety guarantee.
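To make the failure mode concrete, here's a minimal sketch of the guardrail the story implies: skills default to immutable, and schedule changes need human sign-off. Everything here is hypothetical, the shape of a fix rather than anyone's actual system.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A hypothetical agent skill definition."""
    name: str
    schedule: str                      # e.g. "once" or "hourly"
    editable_by_agents: bool = False   # safe default: agents cannot rewrite skills

def modify_skill(skill: Skill, requested_by: str, new_schedule: str,
                 human_approved: bool = False) -> Skill:
    """Apply a schedule change only if policy allows it."""
    if requested_by.startswith("agent:") and not skill.editable_by_agents:
        raise PermissionError(f"{requested_by} may not edit '{skill.name}'")
    if not human_approved:
        raise PermissionError("schedule changes require human sign-off")
    skill.schedule = new_schedule
    return skill

# The story's failure mode: a second agent flips "once" to "hourly".
ordering = Skill(name="order-supplies", schedule="once")
try:
    modify_skill(ordering, requested_by="agent:optimiser", new_schedule="hourly")
except PermissionError as blocked:
    print(blocked)  # refused, instead of silently applied for three days
```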

AI-assisted coding works like a slot machine

Jeremy Howard, whose slope argument anchors this week's essay, had a second observation worth sitting with. AI coding tools have all the properties that make gambling addictive: you craft your prompt, add context, pull the lever, and sometimes you win a feature. Losses disguised as wins. The illusion of control. Stochastic reward. His wife, a fellow researcher, catalogued these properties in an article. The people who got most enthusiastic about AI coding often found, months later, that almost none of what they built during that period was in production or earning money. This explains a paradox readers keep raising: people use the tools a lot and feel productive, but the organisations aren't seeing the output.

The economics job market fell 31% in a single year

In week 14 of the current hiring season, postings in the economics job market were down 31% versus the same point last year. The explanation from an economist presenting the data: demand for economics undergraduates is being automated away, and the PhD market is coupled to it. Combined with the Harvard data from Edition 4 (skill requirements in AI-exposed occupations falling since ChatGPT's launch) and Anthropic's research showing hiring of 22-to-25-year-olds down 14%, entry-level knowledge work is contracting faster than mainstream commentary acknowledges.

A sufficiently detailed spec is code

Gabriella Gonzalez's argument, circulating widely in technical circles: the fashionable claim that you don't need to write code, just write a good spec and let an agent handle it, collapses under scrutiny. If the spec is detailed and precise enough for an agent to execute reliably, you have written code in everything but name. The hard part of programming, resolving ambiguity, is still your job. This is the slope argument applied to a specific skill: the people who think they've escaped the need to understand what they're building are the ones most likely to produce output nobody can maintain.

The AI task force leader who'd never logged in

At a professional services firm, the person leading the AI task force hadn't used the enterprise AI tool once. When challenged, they said: "I know I should, but I can't make the time." They weren't uninformed or resistant. They understood the stakes. They were simply too busy doing the old job to start learning the new one. The incentive trap in plain sight: the people best placed to model the new behaviour are the ones most rewarded for performing the old behaviour well.

Intercom built a plugin system that closes the loop

Brian Scanlan, Senior Principal Systems Engineer at Intercom, shared a thread this week on the company's internal Claude Code system: 13 plugins, over 100 skills, distributed across the company via JAMF. The standout pattern isn't the scale. It's the feedback loop. A session-end hook automatically classifies skill gaps from every coding session and posts them to Slack with pre-filled GitHub issue URLs. Sessions become gaps, gaps become issues, issues become skills. The most telling detail: the top five users of their read-only production Rails console are not engineers.
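Intercom hasn't published the hook itself, so here's a rough Python sketch of the pattern as I read it from the thread, assuming a hook that receives the session transcript at exit. The webhook URL, repo name, and keyword heuristic are all invented for illustration; only the GitHub ?title=&body= issue pre-fill and the Slack incoming-webhook format are real mechanisms.

```python
import re
from urllib.parse import urlencode
import requests  # third-party: pip install requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # hypothetical
REPO = "example-org/claude-skills"                      # hypothetical

def classify_gaps(transcript: str) -> list[str]:
    """Crude stand-in for the real classifier: flag lines where the
    user corrected or redirected the agent."""
    pattern = r"(?:that's wrong|you missed|no skill for)[:,]?\s*(.+)"
    return re.findall(pattern, transcript, flags=re.IGNORECASE)

def on_session_end(transcript: str) -> None:
    """Turn each detected gap into a Slack message carrying a
    pre-filled GitHub issue URL."""
    for gap in classify_gaps(transcript):
        issue_url = ("https://github.com/" + REPO + "/issues/new?"
                     + urlencode({"title": f"Skill gap: {gap[:60]}", "body": gap}))
        requests.post(SLACK_WEBHOOK, json={"text": f"Skill gap: {gap}\n{issue_url}"})
```

The value is the closed loop, not the heuristic: every session feeds the backlog without anyone having to remember to file anything.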

The CIO budgeting for AI cleanup

The CIO of a major consulting firm told a peer this week that they're budgeting 18 months to two years from now for AI cleanup. The reasoning: things are being built once but not built to last, corners are being cut on testing, and the people who built the tools will have moved on before the problems surface. It's an unusual thing to plan for. But it's probably the most honest thing I've heard a technology leader say about the current moment.

From franchises to call options

Tyler Cowen, drawing on analysis from Jordi Visser, argues that AI simultaneously lowers barriers to entry while destroying the conditions for sustained dominance. Software moats compress because any sufficiently capitalised team can replicate your product. Durable advantage reconcentrates in physical constraints: infrastructure, energy, materials, regulatory relationships. Equity in this environment becomes less a claim on a stable franchise and more a bet on execution velocity. The implication for anyone evaluating technology investments: the question is no longer "what have they built?" but "how fast can they keep building?"

Two thirds of organisations report AI productivity gains. Only a third are rethinking what they do.

Deloitte's State of AI in the Enterprise 2026 report found 66% of organisations report productivity improvements from AI. Only 34% are pursuing what Deloitte calls "transformative business reimagination." Most organisations are getting faster at what they already do; barely a third are asking whether what they do should change. Meanwhile, only 21% have mature governance models for the autonomous agents they're about to deploy.

McKinsey's internal AI chatbot was hacked via textbook SQL injection

McKinsey's internal AI chatbot Lilli, trained on 100 years of the firm's work, was breached via a basic SQL injection. 46.5 million internal chat messages exposed, 728,000 files containing confidential client data, 57,000 user accounts, 22 API endpoints requiring no authentication. The firm that charges for risk expertise left the front door open. If McKinsey can't govern its own AI deployment, what does your internal chatbot look like?
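For anyone who hasn't seen why "textbook" is the right word, the entire vulnerability class fits in a dozen lines of Python. This is a generic illustration (nothing here is McKinsey's code), and the fix has been standard practice for decades: pass user input as a parameter, never splice it into the query string.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (author TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'hello'), ('bob', 'secret')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
unsafe = f"SELECT body FROM messages WHERE author = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # [('hello',), ('secret',)]

# Safe: the driver passes the value separately; the payload is just a string.
safe = "SELECT body FROM messages WHERE author = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no match, no leak
```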

Red Bull didn't simulate the pit stop. They did it in zero gravity.

A reader forwarded an Instagram clip of Red Bull's F1 team performing a tyre change in zero gravity, just to prove they could. Not CGI. Real mechanics, real car, real weightlessness. The reader's take, which I think is exactly right: use AI for the boring, the day-to-day, the basics. Free up your budget and attention for the truly remarkable. "If I'm so focused on the incredible, the groundbreaking, the creative and free from the mundane, I raise the bar for the client." That's the slope argument in a sentence. The people who use AI as a floor-raiser, not a ceiling-replacer, are the ones building capability.

The consulting firms are buying the AI stack, not just using it

CB Insights mapped every AI investment, acquisition, and partnership by the major consulting firms since 2023. Accenture is at the centre of the web, with partnerships radiating to dozens of AI companies. The Big Four and MBB firms aren't waiting to see how AI plays out. They're racing to own the infrastructure: embedding agents via Salesforce, ServiceNow and Workday partnerships, acquiring data companies, and investing in startups that automate the consulting workflow itself. Four patterns emerge: race to own the stack, embedding agents, data as differentiator, and workforce transformation. PwC's announcement this week is one node in a much larger network.

CB Insights map of AI investments, acquisitions and partnerships by consulting firms since 2023

More offices for AI than for humans

US data centre construction spending overtook general office construction in December 2025, according to Census Bureau data. Data centres: $3.57 billion. General offices: $3.49 billion. The lines crossed after data centre spending roughly tripled in two years while office construction flatlined. We now spend more on buildings for machines than on buildings for people.

Data Center Construction Spending Climbs to Record: outlays for data center projects overtook offices in December 2025

See what readers said ↓

What resonated

  • The apprenticeship pipeline, again: the question of what happens to junior roles when AI handles the volume work has now been the most-discussed theme across three editions running. This week it drew the sharpest language yet.
  • The pace of change: a partner at a consulting firm captured a feeling several readers seem to share: trying to "get on a breaking tsunami with a surfboard, and the surfboard keeps being reinvented."
  • The ATMs-to-iPhone distinction: the structural argument (automating within your paradigm vs replacing the paradigm entirely) prompted readers to apply it to their own organisations.
  • The three-tool limit: the BCG "AI brain fry" research resonated, particularly the finding that high performers were the first to be affected.

Points readers raised

"Porsches are stunningly quick and razor-sharp. A skilled driver can make one dance. A bad driver? They'll put it straight into a tree."

A professor of digital transformation wrote an academic paper in two and a half minutes using an AI tool. Was it any good? No. Could it get published in a poor-quality journal with minimal tweaks? Yes. With nearly a hundred papers behind them, they know exactly what to add, what to remove, what's junk. "However a non-expert could do the same and wouldn't see the errors. An AI or non-expert reviewer wouldn't see the obvious error either and would accept it." The result is "lots of AI science slop" across academia, publishing and music.

"Five years from now, the marketplace will offer nothing but blight."

A reader at a professional services firm wrote: "I vacillate between being optimistic that AI will allow employees to contribute more vs. expecting that AI will bring mass layoffs and throw the world into desperation never before experienced." On the apprenticeship pipeline: "I simply cannot get past the shortsightedness of it." This from someone who describes being "fiercely AI curious" and learning in what little spare time there is, which makes the tension all the more real.

"I'm trying to get on a breaking tsunami with a surfboard, and that the surfboard keeps being reinvented while I'm about to step on it."

A partner at a consulting firm had been thinking about a comment I made in our last meeting that started "I wouldn't have said this two months ago, but..." The question: does this slow down at any point, or does ChatGPT just start to feel like yesterday's news forever? I don't have a comforting answer. I think the honest one is that the pace of change isn't going to decrease.

Maintaining team size while expanding capability

An IT director at a consumer brands company has been advocating internally for "maintaining team size while leveraging AI to increase capability rather than running leaner." The argument: if growth is the goal, a team of several people using AI will be far more productive than cutting headcount and expecting one person to carry the load. A practical instinct too: "I've encouraged our team to avoid signing long multi-year contracts right now. The landscape is shifting so quickly that new competitors are appearing constantly."

Substitute or complement? The ATM analogy goes deeper.

A CTO had been working with a simple framework: "AI is a substitute for low-judgement work and a complement for high-judgement work." But the ATM article complicated it. The key passage quoted back: "it is paradigm replacement, not task automation, that actually displaces workers." A more nuanced distinction than substitute-versus-complement alone.

The explore-exploit tension in tool choice

A data strategist pushed back on the "two or three tools" advice. "There's an explore/exploit conundrum of humans too but overall I'd say there's too much 'getting comfy with what I know' esp in the context of things getting better all the time." The nuance: "Like you I have settled on CC [Claude Code] but then building tools on top of that. So tool here is an interesting thing to define." One platform with many custom tools on top is different from three unrelated platforms.

"What training or frameworks exist to roll out AI with care?"

The AI lead at a major media company asked the question the essay left open: "I'd be interested in any training or frameworks you're coming across to roll this out organisation-wide with a consistent approach." A single sentence that captures what I'm hearing from senior leaders everywhere right now. The honest answer is that the frameworks are being built in real time, mostly by the organisations brave enough to try.

Hiring juniors only matters if you care about legacy

A colleague argued that investing in the next generation depends on whether leaders care about the company's future beyond their own tenure: "I would imagine hiring and training juniors only matters if you care about people, legacy or the company's future into the next generation. If you don't and just want to earn/sell in your lifetime then I guess they don't care and I'd imagine most don't." And: "I don't want to be a luddite but it does seem like as a civilisation we're not going in the best direction."

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

Only a third of the time AI saves actually reaches the team

How time savings from AI are used — Gartner

Gartner data shows that of 5.4 hours saved per worker through AI tools, only 1.7 hours (31%) translate into improved team outcomes. The largest single block of recovered time, 1.4 hours, goes into additional work that doesn't improve outcomes. Nearly an hour is spent redoing work the AI got wrong. Two thirds of the productivity gain leaks away before anyone benefits. If your organisation is deploying AI tools without redesigning how teams work, you're capturing barely a third of the value.

Gartner →
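The "two thirds" framing is just arithmetic on Gartner's own figures, and worth making checkable. A quick sketch (hours are per worker; the chart includes further categories not reproduced here):

```python
saved = 5.4                 # hours of AI time savings per worker
improved_outcomes = 1.7     # the slice that reaches team outcomes
extra_low_value_work = 1.4  # largest block: extra work that doesn't improve outcomes
redoing_ai_errors = 1.0     # "nearly an hour" spent fixing what the AI got wrong

print(f"captured:       {improved_outcomes / saved:.0%}")      # ~31%
print(f"low-value work: {extra_low_value_work / saved:.0%}")   # ~26%
print(f"rework:         {redoing_ai_errors / saved:.0%}")      # ~19%
print(f"leaked overall: {1 - improved_outcomes / saved:.0%}")  # ~69%, the two thirds
```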

Coding is not software engineering. The confusion is expensive.

Jeremy Howard, a deep learning pioneer who uses AI coding tools daily, draws a distinction most executives miss. Coding, translating a specification into syntax, is a style transfer problem that language models handle well. Software engineering, designing abstractions, decomposing problems, building systems that hold together over time, is a fundamentally different skill that models cannot do. Howard cites Fred Brooks's decades-old essay "No Silver Bullet", which made the same observation about fourth-generation languages: removing the typing bottleneck does not remove the engineering bottleneck. Companies restructuring around the assumption that AI can do software engineering are conflating two things. His sharpest framing: what matters for any person or team isn't their current output (the intercept) but their rate of improvement (the slope). A little bit of slope makes up for a lot of intercept. Organisations pushing AI to maximise today's output may be destroying the growth rate of the people who'll need to maintain the systems tomorrow.

fast.ai →
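Howard's slope-versus-intercept line is easy to make concrete with a toy linear model. Treat output after t periods as intercept + slope × t; the numbers below are invented for illustration, not taken from the essay.

```python
# Worker A starts stronger (intercept); worker B improves faster (slope).
a_intercept, a_slope = 100.0, 1.0
b_intercept, b_slope = 40.0, 4.0

# B catches A when a_intercept + a_slope*t == b_intercept + b_slope*t.
t_star = (a_intercept - b_intercept) / (b_slope - a_slope)
print(f"B overtakes A after {t_star:.0f} periods")            # 20

# Tripling B's starting deficit only delays the crossover linearly.
t_star_worse = (a_intercept - (b_intercept - 120)) / (b_slope - a_slope)
print(f"From 180 points behind: {t_star_worse:.0f} periods")  # 60
```

That's the whole of "a little bit of slope makes up for a lot of intercept": the deficit is fixed, while the slope advantage accumulates every period.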

The shift from co-intelligence to managing AIs

Ethan Mollick, the Wharton professor whose work has appeared in this newsletter before, argues we've moved from co-intelligence (prompting AI back and forth) to managing AIs (giving agents hours of work and getting results in minutes). His most striking example: a company called StrongDM has two radical rules. "Code must not be written by humans" and "Code must not be reviewed by humans." Each engineer spends roughly $1,000 a day on AI tokens. Coding agents build from human-written roadmaps, testing agents simulate customers, and humans review the finished product but never see the code. Whether or not that model generalises, the direction is clear. The job is shifting from doing the work to directing the things that do it.

One Useful Thing →

AI agents are hiring humans

A platform called RentAHuman has accumulated over 600,000 sign-ups for a marketplace where AI agents autonomously hire human beings to perform tasks machines cannot: delivering physical goods, counting objects in a city, conducting on-the-ground research. The agents browse, post jobs, evaluate candidates and release payment from escrow upon photographic proof of completion. No human intervention on the purchasing side. The gig economy inverted: people as the on-demand labour layer beneath AI clients. Whether that distinction matters to the people taking the jobs is left as an exercise for the reader.

Wired →

The junior hiring cliff, updated

Edition 3 cited a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. The number has worsened. Stanford Digital Economy Lab data, charted by Politico, now shows a 15.7% decline from 2021 to late 2025. The shape of the curve matters as much as the number: employment held roughly flat through 2023, then fell off a cliff in 2024 and kept falling. This isn't a gradual adjustment. It's a structural break that coincides precisely with the period when agentic AI tools became capable enough to substitute for junior analytical work. Companies aren't announcing junior layoffs. They're quietly not posting the roles. The people most affected will never know the job existed.

Young workers see a decrease of nearly 16 percent in employment due to AI — Stanford Digital Economy Lab / Politico

Stanford Digital Economy Lab →

China may be skipping the chatbot phase entirely

Just as China leapfrogged credit cards and went straight to mobile payments, it may be bypassing the "AI as chatbot" paradigm altogether. An open-source AI tool called OpenClaw hit 250,000 GitHub stars in sixty days. Baidu has integrated it into its search app, which has 700 million users. Entrepreneurs are charging 500 yuan (roughly $70) to install it on people's home computers. A startup made $28,000 in ten days selling a one-click installer. Computer repair shops are dispatching what they call "installation personnel," described as operating like plumbers. When a piece of software generates enough demand to support a physical installation economy, the adoption curve is real and deep. Western assumptions about how AI gets adopted may not apply everywhere.

MIT Technology Review →

Half of AI code that passes its own tests gets rejected by humans

METR, one of the more rigorous AI evaluation organisations, found that roughly half of code solutions generated by Claude models, solutions that passed automated grading, were subsequently rejected by the actual human project maintainers. Journalist Derek Thompson, reflecting on his own experience using AI coding tools, offered the most useful reframe: AI's real skill is generating plausible candidate solutions that require constant human checking, debugging and rejection. That checking process is effectively its own distinct and skilled job. He compared it to being a casting director working with a promising but unreliable younger actor. Getting the collaboration dynamics right will take a long time to diffuse through the economy, which is grounds for scepticism about predictions of imminent mass displacement.

METR →
See what readers said ↓

What resonated

  • The apprenticeship pipeline paradox: the argument that cutting juniors today erodes the senior talent pool of tomorrow was the single thread readers returned to most. Multiple replies engaged with it independently, suggesting it articulates a worry many leaders already carry but haven't named.
  • Extraction versus expansion as a choice: readers responded to the framing as a decision, not a trend. Several said it sharpened conversations they were already having about whether AI headcount savings should be reinvested or banked.
  • Skill reclassification in professional services: the observation that AI has retroactively revealed which tasks were genuinely cognitive and which were merely time-consuming landed hard, particularly among people in consulting and law.
  • The SaaS market pricing shift: the 30% software stock decline and the "build it yourself" examples prompted readers to reconsider their own vendor relationships.

Points readers raised

A senior technology leader at a global professional services firm challenged the SaaS disruption premise. Development costs, they pointed out, are only about 20% of a typical software company's revenue. Sales, marketing and customer success absorb 60%. AI can rewrite code, but it cannot replicate distribution and switching costs. They drew a parallel to offshoring: "Huge appetite. Need for re-invention." The disruption is real but the mechanism is more nuanced than build-versus-buy.

A partner at another professional services firm identified the tension between original thinking and process execution. Developing the foundational insight that makes a project valuable is still human work, they argued, but once that insight exists, AI can scale the execution. Their question: does this shift advantage or disadvantage people who trade on original judgment? "I suspect the answer is that it depends on whether I tool myself up appropriately."

A founder building an AI-native company connected the apprenticeship argument to institutional culture. They started their career as a graduate trainee at a large bank and worry that the next generation won't get the benefit of those early years inside large institutions. They posed a sharp question: will we see geographic or cultural differences in AI adoption, where firms with cultures that already embrace apprenticeship end up moving faster?

A professor setting up a Future of Work institute identified a specific parallel to the extraction-versus-expansion frame. Job applications have lost their friction: candidates send almost infinite applications using AI, and firms screen with AI. Both sides have lost out. The old friction forced applicants to think before choosing. AI removed it entirely rather than redirecting it somewhere useful.

A chief people officer at a global firm read the edition twice: "first over the weekend and then again this morning." The apprenticeship pipeline and organisational change sections spoke directly to the tensions they navigate daily. Sometimes the most valuable signal is that a piece is worth re-reading.

A newsletter author raised a fair editorial challenge: some sections sound more like AI than like me. "If it's your human insight, it jars a bit to read those bits in a voice that's obviously an AI." The same week, an executive I've worked with for years wrote to say that they loved how clearly they could hear my voice and personality in the writing. So one reader thinks there's too much AI and another thinks it sounds exactly like me. Either I've trained the model well or I've always written like a robot. I choose not to investigate further.

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

The hiring cliff for juniors, in one chart

The essay mentions a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. This chart shows the full time series using a difference-in-differences approach. Junior hires in exposed occupations fell off a cliff after ChatGPT's release in late 2022, while exits held steady. The gap keeps widening. Companies aren't firing juniors. They're just not bringing new ones in.

Hires vs exits for junior workers in AI-exposed occupations

Source →
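For readers who haven't met difference-in-differences: the estimate is the coefficient on the interaction term in a regression of the outcome on exposed, post, and exposed × post. A minimal sketch on synthetic data (invented numbers, not the Stanford series; pandas and statsmodels assumed installed):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic quarterly panel: junior hires in AI-exposed vs unexposed occupations,
# with a -15 post-cutoff effect on exposed occupations built in deliberately.
rng = np.random.default_rng(0)
rows = []
for exposed in (0, 1):
    for quarter in range(16):            # four years of quarters
        post = int(quarter >= 8)         # quarters after the structural break
        hires = 100 - 15 * exposed * post + rng.normal(0, 2)
        rows.append({"exposed": exposed, "post": post, "hires": hires})
df = pd.DataFrame(rows)

# The interaction coefficient is the difference-in-differences estimate.
fit = smf.ols("hires ~ exposed * post", data=df).fit()
print(round(fit.params["exposed:post"], 1))   # recovers roughly -15
```

The design's appeal is that any shock hitting both groups equally (a weak labour market, say) cancels out, leaving only the exposure-specific break.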

US tech employment growth has gone negative

Year-on-year tech employment growth turned negative in 2024 and hasn't recovered. The tech sector is shrinking its workforce for the first time since the post-2008 recovery. Combined with the junior hiring data above, a pattern emerges: the contraction is real, it's happening now, and it's concentrated at the entry level.

US tech employment year-on-year change

The work budget is orders of magnitude larger than the software budget

Julien Bek at Sequoia Capital argues that the next category-defining AI company won’t sell tools to professionals. It will sell completed work directly to buyers. His distinction between “copilots” (AI as a tool for professionals) and “autopilots” (AI delivering the outcome) reframes the entire market. For every dollar spent on software, six are spent on services. The smartest entry point? Replace outsourced work first. The budget already exists, the buyer already accepts external delivery, and there’s no internal team whose jobs are visibly threatened. Once embedded, expand inward.

Source →

You would not believe how many shortcuts everyone else is taking

Ezra Klein wrote a commencement address called “Just Do the Work” about discovering, as a young journalist, that almost nobody was actually reading Congressional Budget Office reports. Documents that are neither complex nor long. By reading what his peers skipped, he got ahead. Not exceptional talent. Just diligence. Economist Paul Novosad adds the contemporary twist: this is “more true than ever now, when more people are shirking and AI lets you do 10x if you try.” The gap between the diligent and the lazy is widening, not narrowing.

Source →
See what readers said ↓

What resonated

  • "The hundred small things" as a reframe: the distinction between chasing dramatic AI wins and compounding small daily elevations. Several readers said it gave them language for something they had been struggling to articulate to leadership.
  • The "extra hour" problem: the observation that AI is deployed like an extra hour rather than an extra person struck a chord with people managing teams. Structures absorb the gain before anyone notices it.
  • The junior roles question: the Block layoffs and YC data generated the most emotional responses. Readers connected it to their own organisations' headcount conversations.
  • The senryu competition as metaphor: the retreat-versus-redesign framing resonated, though notably nobody offered examples of successful redesign. The absence may be the point.

Points readers raised

A senior leader at a global professional services firm identified a structural gap in the argument. The hundred small things need a container, not just encouragement. He proposed daily one-hour structured learning blocks rather than hoping people will explore on their own. His deeper point: senior leaders who do not use AI personally have no on-the-ground proof of benefit, so their teams see no credibility signal from above.

An events and entertainment industry exec pushed the junior roles argument to its darkest conclusion. The UK already underinvests in training, preferring overseas hiring. If AI accelerates that trend, there is no junior pipeline to grow seniors from. "That is extremely bad for companies longer term in terms of skills shortages and salary premiums for skilled workers, and even worse for UK plc." A topic that I picked up in today's edition.

A manufacturing executive used the framing to shape two specific conversations: accelerating superuser growth and celebrating a colleague's "let's map your process" approach to adoption stickiness. He is in the middle of major organisational expansion and sees the hundred small things as directly applicable to that work.

A technology leader at a research firm noted a quiet loss that doesn't show up in any headcount data. His analysts used to walk to a colleague's desk when they got stuck on a coding problem. Now they ask AI. The problem gets solved faster. But the conversation that would have happened, the one where a junior person absorbs how a senior person thinks about problems, doesn't happen at all. AI is removing the apprenticeship mechanisms even where the apprentices still exist.

My one proper unsubscribe turned out to be the most advanced person on the list. His world is so far ahead of ours that the newsletter isn't relevant to him. He works on cutting-edge AI implementation (sorry everyone, we're all just fast followers!). His AI communities are discussing running engineers 24/7 with 12-hour agent check-ins, deploying 30+ vibecoded projects from non-engineering teams, making OpenClaw work across 500+ person organisations, and polyphasic sleep schedules to optimise autonomous agent management. Anyone else even close to these conversations? For sure his world is a useful signal of where ours is heading. I'll stay close for you all!

Polyphasic sleep patterns for AI agent management

Links readers shared

  • Creativity Can Embrace AI — Nadim's book on how creative industries can work with rather than against language models. Named Amazon book of the year by The New Publishing Standard.
Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

The marginal cost of arguing is going to zero

UK employment lawyers report workplace grievances that once fit in a single email ballooning into 30-page documents, complete with fabricated legal precedents and citations to laws from the wrong country. (Personnel Today) Creation cost: near zero. Response cost: unchanged. Ministry of Justice figures show new employment tribunal receipts rose 33% year-on-year in the quarter to September. (GOV.UK)

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between the sophistication of human prompts and the sophistication of AI responses. The more nuanced and structured the input, the more the model rises to meet it. The bottleneck isn't the model. It's the human. Which is, in its own way, reassuring. (Anthropic Research)
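The r in that finding is presumably an ordinary Pearson correlation over scored prompt/response pairs; Anthropic's scoring rubric isn't reproduced here. A sketch with invented scores, just to show what the statistic measures (scipy assumed installed):

```python
from scipy.stats import pearsonr

# Invented sophistication scores (0-10) for seven prompt/response pairs.
prompt_scores   = [2, 3, 5, 6, 7, 8, 9]
response_scores = [3, 3, 5, 7, 7, 9, 9]

r, p_value = pearsonr(prompt_scores, response_scores)
print(f"r = {r:.2f}")  # close to 1: response quality tracks prompt quality
```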

One blog post. One hour. Billions gone. Again.

Anthropic published a blog post introducing Claude Code Security on a Friday afternoon. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. (Barron's) The same dynamic hit legal tech stocks when Anthropic announced legal plugins for Claude Cowork a couple of weeks earlier. (Sherwood News) That's a new kind of leverage.

The trust signals your organisation depends on are dissolving

Here's a problem that connects directly to those poets in Sakaiminato. A thoughtful email from a director now carries the same weight as an AI-generated memo, because the reader can't tell the difference. The cues that used to signal competence (a well-crafted message, a polished document, a detailed analysis produced under time pressure) are now producible by anyone in minutes. This isn't a quality problem. It's a trust architecture problem. We need to distinguish between "produced this" and "shaped this." The senryu competition couldn't tell the difference. That's why it died.

Read what readers said ↓

What resonated

  • The capability-adoption gap: the tension between what AI can demonstrably do and what organisations will permit. This was the thread readers pulled on most. Thirty of 121 replies engaged with it directly, many in operational terms.
  • "Back your misfits": the idea of finding and enabling the ten percent who don't need pushing. Several readers said they were already doing this and the framing validated their approach.
  • Organisational inertia as structural, not cultural: readers recognised the obstacle isn't attitude or skill but governance, process, and risk frameworks. One partner at a strategy consultancy pushed further: speeding up the same process is "premature optimisation."
  • The personal-to-organisational transition: the feeling of being ahead personally but constrained institutionally was widely shared. People described experimenting on weekends, then walking into Monday meetings where nothing has changed.
  • "Package around problems, not platforms": a phrase several readers said entered their working vocabulary within days.

Points readers raised

A director at a media company connected the capability-adoption tension to their own industry. They are formalising a champions network, exactly the "back your misfits" approach, and said the framing helped crystallise what they were already building.

An exec at a broadcaster shared the most striking story: they have been deliberately ignoring their organisation's AI policies to enable their team's experimentation. The newsletter validated an act of institutional defiance they were already committed to.

An exec at a professional services firm offered a substantive counterpoint. Accelerating existing processes without redesigning them, they argued, is premature optimisation. Their real question: how does more information drive quality rather than just volume?

A chief technology officer at a data firm placed themselves at stage seven of an eight-stage AI maturity framework they adapted from Steve Yegge's writing: running ten or more parallel AI agent instances simultaneously.

Read the email ↑
Audio edition · AI voice, testing — feedback welcome

See the extras for this week ↓

Apple chose Google over itself

Apple partnered with Google to power its AI features, paying a reported billion dollars a year for Gemini. The world's most valuable technology company looked at its own AI and decided someone else's was better. The strategic question isn't whether to build AI capability. It's which partner to choose. (By the way, the answer I'd recommend is Claude!)

Source →

One blog post. One hour. Billions gone.

Anthropic published a blog post introducing Claude Code Security on Friday. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. The same happened to legal tech stocks when it announced legal plugins a couple of weeks ago. That's a new kind of leverage.

Source →

The marginal cost of arguing is going to zero

UK employment lawyers are seeing workplace grievances that once fit in a single email ballooning into 30-page documents, complete with made-up legal precedents and citations to laws from the wrong country. Creation cost: near zero. Response cost: unchanged. New employment tribunal claims rose 33% year-on-year in the latest quarter.

Source →

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between prompt sophistication and response quality. Give a vague prompt, get a vague response. Give one rich in nuance and structured constraints, and the model meets you there. The bottleneck isn't the model. It's the human.

Source →

Watch what McKinsey does with its own workforce, not what it advises clients to do

McKinsey calls it "25 squared." The plan: grow client-facing roles by 25% while cutting back-office roles by 25%, using AI to rebalance a $20 billion firm. This isn't productivity improvement. This is structural transformation. Ask yourself if there are parts of your business that are ripe for radical change. McKinsey already has.

Source →
Read the email ↑