<?xml version='1.0' encoding='utf-8'?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0" version="2.0">
  <channel>
    <title>David's Saturday AI Thoughts</title>
    <link>https://steadman.ai/newsletters/david/podcast.html</link>
    <description>Each Saturday, David Boyle reflects on what feels important in the world of AI. Not the breathless hype or the doom. The practical, analytical perspective: what happened this week, what it means for people who use language models in their work, and what to try next.

David is Director of Audience Strategies and co-founder of Steadman. He advises organisations from L.E.K. Consulting to the BBC on AI adoption. This podcast is a spoken-word version of his Saturday AI Thoughts newsletter, with different voices for each section.</description>
    <language>en-gb</language>
    <copyright>© 2026 Steadman AI</copyright>
    <generator>generate_podcast_feed.py</generator>
    <atom:link href="https://steadman.ai/newsletters/david/podcast/feed.xml" rel="self" type="application/rss+xml" />
    <itunes:author>David Boyle</itunes:author>
    <itunes:summary>Each Saturday, David Boyle reflects on what feels important in the world of AI. Not the breathless hype or the doom. The practical, analytical perspective: what happened this week, what it means for people who use language models in their work, and what to try next.

David is Director of Audience Strategies and co-founder of Steadman. He advises organisations from L.E.K. Consulting to the BBC on AI adoption. This podcast is a spoken-word version of his Saturday AI Thoughts newsletter, with different voices for each section.</itunes:summary>
    <itunes:explicit>false</itunes:explicit>
    <itunes:type>episodic</itunes:type>
    <itunes:owner>
      <itunes:name>David Boyle</itunes:name>
      <itunes:email>david@steadman.ai</itunes:email>
    </itunes:owner>
    <itunes:image href="https://steadman.ai/newsletters/david/podcast/artwork.jpg" />
    <itunes:category text="Technology">
      <itunes:category text="Tech News" />
    </itunes:category>
    <podcast:locked>yes</podcast:locked>
    <item>
      <title>Rise of the auditors</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-04-25</link>
      <description>AI-native teams need three roles: Director, Builder, Auditor. Execution is cheap, verification is expensive. Most organisations have zero Auditors and are shipping nothing because nobody is named to check.

What happened this week:
* &lt;10% of organisations scale agents beyond pilots (McKinsey)
* GitHub paused Copilot signups / Uber blew 2026 AI budget / Goldman inference costs approaching headcount parity
* 29% of employees sabotaging AI initiatives (Writer survey)

What to try:
* Ask: is this the simplest version? (Cantrill laziness)
* Audit cold: different model, fresh context
* Save one reusable AI workflow (Chrome Skills)

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-25</description>
      <content:encoded>&lt;p&gt;AI-native teams need three roles: Director, Builder, Auditor. Execution is cheap, verification is expensive. Most organisations have zero Auditors and are shipping nothing because nobody is named to check.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&amp;lt;10% of organisations scale agents beyond pilots (McKinsey)&lt;/li&gt;
&lt;li&gt;GitHub paused Copilot signups / Uber blew 2026 AI budget / Goldman inference costs approaching headcount parity&lt;/li&gt;
&lt;li&gt;29% of employees sabotaging AI initiatives (Writer survey)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Ask: is this the simplest version? (Cantrill laziness)&lt;/li&gt;
&lt;li&gt;Audit cold: different model, fresh context&lt;/li&gt;
&lt;li&gt;Save one reusable AI workflow (Chrome Skills)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-04-25"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-04-25.mp3" length="9681547" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-04-25.mp3</guid>
      <pubDate>Sat, 25 Apr 2026 09:00:00 +0100</pubDate>
      <itunes:title>Rise of the auditors</itunes:title>
      <itunes:episode>10</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>12:26</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>AI-native teams need three roles: Director, Builder, Auditor. Execution is cheap, verification is expensive. Most organisations have zero Auditors and are shipping nothing because nobody is named to check.</itunes:summary>
      <podcast:transcript url="https://steadman.ai/newsletters/david/podcast/transcripts/2026-04-25.vtt" type="text/vtt" language="en" />
    </item>
    <item>
      <title>The proxy break</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-04-18</link>
      <description>AI broke the old proxy (good writing = good thinking) but the new proxy ('sounds like AI' = no thinking) is equally unreliable. A friend's challenge prompted a deeper question: evaluate thinking, not wording. Two tests proposed: Quality (does the argument hold under pressure?) and Ownership (CEO principle). Fine-tuned models now preferred over human writing 62% of the time.

What happened this week:
* AI cover letters killed the signal: Freelancer.com study shows Goodhart's Law in action, better letters no longer predict better hires
* Snap cut 1,000 jobs (16% workforce), AI writes 65% of new code, $500M annualised savings: substitution model has arrived
* Passive AI delegation erodes confidence, pushing back strengthens it: 2,000-person study. Gartner: of 5.4hr saved, only 0.6hr reduces working time

What to try:
* Ask what keeps people awake at night, not how AI can help: surfaces real problems with AI solutions
* Let the model research you before writing custom instructions: web search + self-portrait generates better instructions than manual writing
* Find where your AI value sits: AI Value Map interactive tool, five questions on value allocation then five on capture

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-18</description>
      <content:encoded>&lt;p&gt;AI broke the old proxy (good writing = good thinking) but the new proxy (&amp;#x27;sounds like AI&amp;#x27; = no thinking) is equally unreliable. A friend&amp;#x27;s challenge prompted a deeper question: evaluate thinking, not wording. Two tests proposed: Quality (does the argument hold under pressure?) and Ownership (CEO principle). Fine-tuned models now preferred over human writing 62% of the time.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;AI cover letters killed the signal: Freelancer.com study shows Goodhart&amp;#x27;s Law in action, better letters no longer predict better hires&lt;/li&gt;
&lt;li&gt;Snap cut 1,000 jobs (16% workforce), AI writes 65% of new code, $500M annualised savings: substitution model has arrived&lt;/li&gt;
&lt;li&gt;Passive AI delegation erodes confidence, pushing back strengthens it: 2,000-person study. Gartner: of 5.4hr saved, only 0.6hr reduces working time&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Ask what keeps people awake at night, not how AI can help: surfaces real problems with AI solutions&lt;/li&gt;
&lt;li&gt;Let the model research you before writing custom instructions: web search + self-portrait generates better instructions than manual writing&lt;/li&gt;
&lt;li&gt;Find where your AI value sits: AI Value Map interactive tool, five questions on value allocation then five on capture&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-04-18"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-04-18.mp3" length="8380504" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-04-18.mp3</guid>
      <pubDate>Sat, 18 Apr 2026 09:00:00 +0100</pubDate>
      <itunes:title>The proxy break</itunes:title>
      <itunes:episode>9</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>10:41</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>AI broke the old proxy (good writing = good thinking) but the new proxy ('sounds like AI' = no thinking) is equally unreliable. A friend's challenge prompted a deeper question: evaluate thinking, not wording. Two tests proposed: Quality (does the ...</itunes:summary>
    </item>
    <item>
      <title>What a day can do</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-04-11</link>
      <description>Team-level AI infrastructure can precede and contain individual training. The cost of encoding how a team works into shared reusable tools just dropped from hours to minutes with Gen 2 tools (Claude Code + transcripts). A small jewellery company built thirteen shared skills in a day. Step two doesn't just follow step one, it can contain it.

What happened this week:
* Claude Code now writes 4% of all GitHub commits, doubled in six weeks; Anthropic run rate $30B (up from $9B at end of 2025), Claude Code alone $2.5B; projected 20% of commits by December
* Goldman Sachs quantified AI's net labour market drag: -25k jobs substituted + 9k augmented = 16k net monthly loss; entry-level-to-experienced wage gap widened 3.3pp. But CFO surveys put genuine AI impact at just 0.4%, while Challenger reports 15,341 job cuts blamed on AI in March alone
* Meta's internal tokenmaxxing leaderboard: 85k+ employees, 60T tokens in one month, Zuckerberg not in top 250. Rewards orchestration over outcomes. Incentivise use yes, incentivise maxxing no

What to try:
* Start with critique, not creation. Brand voice evaluator was diagnosis-only; teams fear proofreaders less than replacements. Nobody fights the spellchecker
* Ask what keeps people up at night, not what they want AI to do. First question surveys existing habits; second surfaces unmet needs. Almost nothing appears on both lists
* Show your team how others use AI. 515-startup field experiment: case studies alone led to 44% more AI usage, 1.9x revenue, 39% less capital needed ('the mapping problem')

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-11</description>
      <content:encoded>&lt;p&gt;Team-level AI infrastructure can precede and contain individual training. The cost of encoding how a team works into shared reusable tools just dropped from hours to minutes with Gen 2 tools (Claude Code + transcripts). A small jewellery company built thirteen shared skills in a day. Step two doesn&amp;#x27;t just follow step one, it can contain it.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Claude Code now writes 4% of all GitHub commits, doubled in six weeks; Anthropic run rate $30B (up from $9B at end of 2025), Claude Code alone $2.5B; projected 20% of commits by December&lt;/li&gt;
&lt;li&gt;Goldman Sachs quantified AI&amp;#x27;s net labour market drag: -25k jobs substituted + 9k augmented = 16k net monthly loss; entry-level-to-experienced wage gap widened 3.3pp. But CFO surveys put genuine AI impact at just 0.4%, while Challenger reports 15,341 job cuts blamed on AI in March alone&lt;/li&gt;
&lt;li&gt;Meta&amp;#x27;s internal tokenmaxxing leaderboard: 85k+ employees, 60T tokens in one month, Zuckerberg not in top 250. Rewards orchestration over outcomes. Incentivise use yes, incentivise maxxing no&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Start with critique, not creation. Brand voice evaluator was diagnosis-only; teams fear proofreaders less than replacements. Nobody fights the spellchecker&lt;/li&gt;
&lt;li&gt;Ask what keeps people up at night, not what they want AI to do. First question surveys existing habits; second surfaces unmet needs. Almost nothing appears on both lists&lt;/li&gt;
&lt;li&gt;Show your team how others use AI. 515-startup field experiment: case studies alone led to 44% more AI usage, 1.9x revenue, 39% less capital needed (&amp;#x27;the mapping problem&amp;#x27;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-04-11"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-04-11.mp3" length="8939512" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-04-11.mp3</guid>
      <pubDate>Sat, 11 Apr 2026 09:00:00 +0100</pubDate>
      <itunes:title>What a day can do</itunes:title>
      <itunes:episode>8</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>11:28</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>Team-level AI infrastructure can precede and contain individual training. The cost of encoding how a team works into shared reusable tools just dropped from hours to minutes with Gen 2 tools (Claude Code + transcripts). A small jewellery company b...</itunes:summary>
    </item>
    <item>
      <title>What is your organisation actually for?</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-04-04</link>
      <description>Organisations say they're production systems but behave like human systems. The revealed preference is togetherness, not efficiency. AI adoption reverts because training optimises for individual productivity while the real binding force is human collaboration.

What happened this week:
* Dorsey wants to replace org charts with AI world models (Block restructuring)
* Mollick says de-weirding AI is a mistake; hidden AI use is the harder problem
* Zapier raised the AI fluency hiring bar: slope not snapshot, accountability added

What to try:
* Have AI interview you before building anything
* Ask AI what looks weird before analysing data
* Let AI be the app: build skills not standalone software

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-04</description>
      <content:encoded>&lt;p&gt;Organisations say they&amp;#x27;re production systems but behave like human systems. The revealed preference is togetherness, not efficiency. AI adoption reverts because training optimises for individual productivity while the real binding force is human collaboration.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Dorsey wants to replace org charts with AI world models (Block restructuring)&lt;/li&gt;
&lt;li&gt;Mollick says de-weirding AI is a mistake; hidden AI use is the harder problem&lt;/li&gt;
&lt;li&gt;Zapier raised the AI fluency hiring bar: slope not snapshot, accountability added&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Have AI interview you before building anything&lt;/li&gt;
&lt;li&gt;Ask AI what looks weird before analysing data&lt;/li&gt;
&lt;li&gt;Let AI be the app: build skills not standalone software&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-04-04"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-04-04.mp3" length="8504907" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-04-04.mp3</guid>
      <pubDate>Sat, 04 Apr 2026 09:00:00 +0100</pubDate>
      <itunes:title>What is your organisation actually for?</itunes:title>
      <itunes:episode>7</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>10:51</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>Organisations say they're production systems but behave like human systems. The revealed preference is togetherness, not efficiency. AI adoption reverts because training optimises for individual productivity while the real binding force is human c...</itunes:summary>
    </item>
    <item>
      <title>The system and the surrender</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-03-28</link>
      <description>A Wharton study of 1,372 people identified 'cognitive surrender': when AI produces an answer, people stop questioning it while recoding it as their own judgment. Accuracy drops from 45.8% alone to 31.5% with incorrect AI. The better the system gets, the harder it becomes to stay vigilant inside it.

What happened this week:
* Three CEOs (Coca-Cola Quincey, Walmart McMillon, Adobe Narayen) stepped down in one quarter citing AI transformation pressure; 38 years of tenure in one turnover, most since 1999
* Anthropic 5th Economic Index + HBR 2,500-employee study: experienced users (6+ months) treat AI as thinking partner, not productivity shortcut. AI may be skill-biased tech that compounds existing advantages
* Ethan Mollick: companies with zero AI failures aren't being ambitious enough. R&amp;D-style experimental budgets need to reach HR, operations, finance

What to try:
* Don't fact-check AI in the same conversation: model defends its own chain. Start fresh, upload source materials cold for critique
* Give your AI reviewer a persona with skin in the game: six senior-partner personas converged on the same systematic error a neutral reviewer missed
* After every good session, turn it into a reusable skill: capture what 'good' looks like the moment you've achieved it, before memory fades

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-03-28</description>
      <content:encoded>&lt;p&gt;A Wharton study of 1,372 people identified &amp;#x27;cognitive surrender&amp;#x27;: when AI produces an answer, people stop questioning it while recoding it as their own judgment. Accuracy drops from 45.8% alone to 31.5% with incorrect AI. The better the system gets, the harder it becomes to stay vigilant inside it.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Three CEOs (Coca-Cola Quincey, Walmart McMillon, Adobe Narayen) stepped down in one quarter citing AI transformation pressure; 38 years of tenure in one turnover, most since 1999&lt;/li&gt;
&lt;li&gt;Anthropic 5th Economic Index + HBR 2,500-employee study: experienced users (6+ months) treat AI as thinking partner, not productivity shortcut. AI may be skill-biased tech that compounds existing advantages&lt;/li&gt;
&lt;li&gt;Ethan Mollick: companies with zero AI failures aren&amp;#x27;t being ambitious enough. R&amp;amp;D-style experimental budgets need to reach HR, operations, finance&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Don&amp;#x27;t fact-check AI in the same conversation: model defends its own chain. Start fresh, upload source materials cold for critique&lt;/li&gt;
&lt;li&gt;Give your AI reviewer a persona with skin in the game: six senior-partner personas converged on the same systematic error a neutral reviewer missed&lt;/li&gt;
&lt;li&gt;After every good session, turn it into a reusable skill: capture what &amp;#x27;good&amp;#x27; looks like the moment you&amp;#x27;ve achieved it, before memory fades&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-03-28"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-03-28.mp3" length="8580369" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-03-28.mp3</guid>
      <pubDate>Sat, 28 Mar 2026 09:00:00 +0000</pubDate>
      <itunes:title>The system and the surrender</itunes:title>
      <itunes:episode>6</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>10:59</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>A Wharton study of 1,372 people identified 'cognitive surrender': when AI produces an answer, people stop questioning it while recoding it as their own judgment. Accuracy drops from 45.8% alone to 31.5% with incorrect AI. The better the system get...</itunes:summary>
    </item>
    <item>
      <title>Reckoning and slope</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-03-21</link>
      <description>The gap between AI wonder and behaviour change. PwC's CEO says get with it or get out, but the real question is get with what. Jeremy Howard's slope-over-intercept frame: capability growth matters more than current output. Anthropic's research shows most coding tool users enter autopilot. Not all inefficiency is waste: some friction is load-bearing.

What happened this week:
* Centaur chess inverted: in 2005 amateurs with laptops beat grandmasters, by 2026 adding a human to an engine makes it worse. Carlsen deliberately limits AI prep to maintain self-generated understanding
* Jensen Huang's benchmark: $500K engineer should consume $250K in AI tokens. But token spend is an intercept metric that says nothing about slope
* RentAHuman: 600,000 sign-ups to a platform where AI agents hire humans for physical tasks. Inverts the usual displacement narrative

What to try:
* Mine your own email archive: pull months of correspondence on a topic, ask AI to synthesise the intellectual arc
* End every AI session the way a developer commits code: one line documenting context for the next session
* Use AI to teach you, not just to do things for you: Bloom's two-sigma tutoring now costs a subscription

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-03-21</description>
      <content:encoded>&lt;p&gt;The gap between AI wonder and behaviour change. PwC&amp;#x27;s CEO says get with it or get out, but the real question is get with what. Jeremy Howard&amp;#x27;s slope-over-intercept frame: capability growth matters more than current output. Anthropic&amp;#x27;s research shows most coding tool users enter autopilot. Not all inefficiency is waste: some friction is load-bearing.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Centaur chess inverted: in 2005 amateurs with laptops beat grandmasters, by 2026 adding a human to an engine makes it worse. Carlsen deliberately limits AI prep to maintain self-generated understanding&lt;/li&gt;
&lt;li&gt;Jensen Huang&amp;#x27;s benchmark: $500K engineer should consume $250K in AI tokens. But token spend is an intercept metric that says nothing about slope&lt;/li&gt;
&lt;li&gt;RentAHuman: 600,000 sign-ups to a platform where AI agents hire humans for physical tasks. Inverts the usual displacement narrative&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Mine your own email archive: pull months of correspondence on a topic, ask AI to synthesise the intellectual arc&lt;/li&gt;
&lt;li&gt;End every AI session the way a developer commits code: one line documenting context for the next session&lt;/li&gt;
&lt;li&gt;Use AI to teach you, not just to do things for you: Bloom&amp;#x27;s two-sigma tutoring now costs a subscription&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-03-21"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-03-21.mp3" length="7571696" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-03-21.mp3</guid>
      <pubDate>Sat, 21 Mar 2026 09:00:00 +0000</pubDate>
      <itunes:title>Reckoning and slope</itunes:title>
      <itunes:episode>5</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>9:38</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>The gap between AI wonder and behaviour change. PwC's CEO says get with it or get out, but the real question is get with what. Jeremy Howard's slope-over-intercept frame: capability growth matters more than current output. Anthropic's research sho...</itunes:summary>
    </item>
    <item>
      <title>The power and the care</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-03-14</link>
      <description>The dual experience of AI acceleration: excitement and terror, said in that order. Top builders 3-5x more productive, median only 10-20%. The gap widens. Amazon outages show what happens when power outpaces care. Skill requirements dropping in AI-exposed jobs. The organisations that will matter are those building in care alongside speed.

What happened this week:
* Shopify CEO ran AI optimisation against 20-year-old Liquid engine: 53% faster parse/render, 61% fewer allocations. Legacy code isn't too embedded to improve, it's too embedded not to try
* BCG/UC Riverside research (1,488 workers): AI productivity peaks at 3 tools then collapses. 'AI brain fry' causes 33% more decision fatigue, 39% more major mistakes. High performers hit first
* ATMs didn't kill bank tellers, the iPhone did. Automating tasks within current structure creates adjacent roles. Redesigning the structure from scratch eliminates them (David Oks, a16z)

What to try:
* Simulate the toughest reader: create AI versions of board members from known priorities and past questions, run every document past them before a human sees it
* Delete the headline, ask AI what it should be: if the model's headline differs from what the colleague wrote, it reveals a disconnect between message and evidence
* Fix the instructions, not just the output: update custom instructions, project briefs, context files after every miss. Output matters today, instructions compound forever

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-03-14</description>
      <content:encoded>&lt;p&gt;The dual experience of AI acceleration: excitement and terror, said in that order. Top builders 3-5x more productive, median only 10-20%. The gap widens. Amazon outages show what happens when power outpaces care. Skill requirements dropping in AI-exposed jobs. The organisations that will matter are those building in care alongside speed.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Shopify CEO ran AI optimisation against 20-year-old Liquid engine: 53% faster parse/render, 61% fewer allocations. Legacy code isn&amp;#x27;t too embedded to improve, it&amp;#x27;s too embedded not to try&lt;/li&gt;
&lt;li&gt;BCG/UC Riverside research (1,488 workers): AI productivity peaks at 3 tools then collapses. &amp;#x27;AI brain fry&amp;#x27; causes 33% more decision fatigue, 39% more major mistakes. High performers hit first&lt;/li&gt;
&lt;li&gt;ATMs didn&amp;#x27;t kill bank tellers, the iPhone did. Automating tasks within current structure creates adjacent roles. Redesigning the structure from scratch eliminates them (David Oks, a16z)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Simulate the toughest reader: create AI versions of board members from known priorities and past questions, run every document past them before a human sees it&lt;/li&gt;
&lt;li&gt;Delete the headline, ask AI what it should be: if the model&amp;#x27;s headline differs from what the colleague wrote, it reveals a disconnect between message and evidence&lt;/li&gt;
&lt;li&gt;Fix the instructions, not just the output: update custom instructions, project briefs, context files after every miss. Output matters today, instructions compound forever&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-03-14"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-03-14.mp3" length="6835551" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-03-14.mp3</guid>
      <pubDate>Sat, 14 Mar 2026 09:00:00 +0000</pubDate>
      <itunes:title>The power and the care</itunes:title>
      <itunes:episode>4</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>8:44</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>The dual experience of AI acceleration: excitement and terror, said in that order. Top builders 3-5x more productive, median only 10-20%. The gap widens. Amazon outages show what happens when power outpaces care. Skill requirements dropping in AI-...</itunes:summary>
    </item>
    <item>
      <title>Extraction or expansion</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-03-07</link>
      <description>Leaders deploying AI face a binary choice: extraction (cut costs from existing operations) or expansion (grow what the organisation is capable of). The apprenticeship pipeline paradox: if juniors never do the grunt work AI now handles, how do they develop the judgment that makes seniors valuable? Hiring as deliberate investment, not necessity.

What happened this week:
* Software stocks down ~30% since October while broader tech flat (Salesforce, Adobe, ServiceNow each down 25-30%). Market pricing in structural shift as AI enables smaller tools to replace enterprise suites
* Goldman Sachs 'AI-nxiety': 70% of S&amp;P 500 discussed AI on earnings calls, only 1% quantified impact. Median reported gain 30%, concentrated in customer support and software development only
* AI retroactively reclassified legal work: general-purpose Claude outperforms expensive legal AI products. Tasks billed at premium rates for decades revealed as procedural, not cognitive

What to try:
* Run AI and human on same task, focus on disagreements: 78% overlap in one test, but value was at the edges where only one method surfaced findings
* Map your org on Gen 1 vs Gen 2 AI tools landscape: most orgs stuck on constrained defaults, need parallel strategies for the many (move to competent use) and the best (accelerate with agentic tools)

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-03-07</description>
      <content:encoded>&lt;p&gt;Leaders deploying AI face a binary choice: extraction (cut costs from existing operations) or expansion (grow what the organisation is capable of). The apprenticeship pipeline paradox: if juniors never do the grunt work AI now handles, how do they develop the judgment that makes seniors valuable? Hiring as deliberate investment, not necessity.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Software stocks down ~30% since October while broader tech flat (Salesforce, Adobe, ServiceNow each down 25-30%). Market pricing in structural shift as AI enables smaller tools to replace enterprise suites&lt;/li&gt;
&lt;li&gt;Goldman Sachs &amp;#x27;AI-nxiety&amp;#x27;: 70% of S&amp;amp;P 500 discussed AI on earnings calls, only 1% quantified impact. Median reported gain 30%, concentrated in customer support and software development only&lt;/li&gt;
&lt;li&gt;AI retroactively reclassified legal work: general-purpose Claude outperforms expensive legal AI products. Tasks billed at premium rates for decades revealed as procedural, not cognitive&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Run AI and human on same task, focus on disagreements: 78% overlap in one test, but value was at the edges where only one method surfaced findings&lt;/li&gt;
&lt;li&gt;Map your org on Gen 1 vs Gen 2 AI tools landscape: most orgs stuck on constrained defaults, need parallel strategies for the many (move to competent use) and the best (accelerate with agentic tools)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-03-07"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-03-07.mp3" length="8132501" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-03-07.mp3</guid>
      <pubDate>Sat, 07 Mar 2026 09:00:00 +0000</pubDate>
      <itunes:title>Extraction or expansion</itunes:title>
      <itunes:episode>3</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>10:16</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>Leaders deploying AI face a binary choice: extraction (cut costs from existing operations) or expansion (grow what the organisation is capable of). The apprenticeship pipeline paradox: if juniors never do the grunt work AI now handles, how do they develop the judgment that makes seniors valuable? Hiring as deliberate investment, not necessity.</itunes:summary>
    </item>
    <item>
      <title>The hundred small things</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-02-28</link>
      <description>AI value sits not in dramatic one-off wins but in a hundred small daily elevations (better meeting prep, cleaner drafts, faster document scanning) that compound into transformation. Firms can't see it because they track big projects, not distributed micro-gains. The Japanese senryu poetry cancellation illustrates the risk: AI alone produces sameness, AI plus human steering produces something better.

What happened this week:
* Citrini Research fiction set in 2028: when AI makes expertise cheap, clients stop buying it. 'A lot of what people called relationships was simply friction with a friendly face'
* Block cut from 10,000 to under 6,000 people while growing gross profit 24%. Y Combinator founders planning to eliminate all engineers below senior level. Market valued each eliminated Block role at ~$1.5M enterprise value
* Google Gemini 3.1 Pro scored 77.1% on ARC-AGI-2 (double predecessor in 3 months). Model choice matters less; application layer and workflow matter more

What to try:
* Run your day through AI at 6pm: daily five-paragraph reflection from meeting notes, voice recordings, emails. Pattern recognition compounds over weeks
* Before building training, check the settings menu: ask five people which model they use and whether they've changed settings. Answers reveal maturity better than any survey
* Push record, think aloud, send to AI: skip the blank page for case studies, knowledge sharing, personal reflection

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-02-28</description>
      <content:encoded>&lt;p&gt;AI value sits not in dramatic one-off wins but in a hundred small daily elevations (better meeting prep, cleaner drafts, faster document scanning) that compound into transformation. Firms can&amp;#x27;t see it because they track big projects, not distributed micro-gains. The Japanese senryu poetry cancellation illustrates the risk: AI alone produces sameness, AI plus human steering produces something better.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Citrini Research fiction set in 2028: when AI makes expertise cheap, clients stop buying it. &amp;#x27;A lot of what people called relationships was simply friction with a friendly face&amp;#x27;&lt;/li&gt;
&lt;li&gt;Block cut from 10,000 to under 6,000 people while growing gross profit 24%. Y Combinator founders planning to eliminate all engineers below senior level. Market valued each eliminated Block role at ~$1.5M enterprise value&lt;/li&gt;
&lt;li&gt;Google Gemini 3.1 Pro scored 77.1% on ARC-AGI-2 (double predecessor in 3 months). Model choice matters less; application layer and workflow matter more&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Run your day through AI at 6pm: daily five-paragraph reflection from meeting notes, voice recordings, emails. Pattern recognition compounds over weeks&lt;/li&gt;
&lt;li&gt;Before building training, check the settings menu: ask five people which model they use and whether they&amp;#x27;ve changed settings. Answers reveal maturity better than any survey&lt;/li&gt;
&lt;li&gt;Push record, think aloud, send to AI: skip the blank page for case studies, knowledge sharing, personal reflection&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-02-28"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-02-28.mp3" length="7839489" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-02-28.mp3</guid>
      <pubDate>Sat, 28 Feb 2026 09:00:00 +0000</pubDate>
      <itunes:title>The hundred small things</itunes:title>
      <itunes:episode>2</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>9:59</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>AI value sits not in dramatic one-off wins but in a hundred small daily elevations (better meeting prep, cleaner drafts, faster document scanning) that compound into transformation. Firms can't see it because they track big projects, not distributed micro-gains. The Japanese senryu poetry cancellation illustrates the risk: AI alone produces sameness, AI plus human steering produces something better.</itunes:summary>
    </item>
    <item>
      <title>The wonder and the weight</title>
      <link>https://steadman.ai/newsletters/david/#edition-2026-02-21</link>
      <description>The tension between individual excitement about AI (senior leaders working weekends, doing a week's work in ten minutes) and organisational inertia (unchanged meeting cadences, team structures, adoption rates). 84% of the world has never used AI.

What happened this week:
* Technical barrier gone, domain expertise matters now: Wharton MBA students built companies in four days with AI tools, non-technical students went furthest
* Apple chose Google over itself for AI features, paying a billion a year for Gemini: build vs buy resolved in favour of partnerships
* McKinsey '25 squared': grow client-facing 25%, cut back-office 25% using AI to rebalance a $20B firm

What to try:
* Find your misfits: identify the one-in-ten AI enthusiasts, free them from reporting lines, give them mandate beyond old job description
* Change the cadence: if AI makes execution faster, review more often (daily not every two days). Review cycle is now the bottleneck
* Package around problems not platforms: 'this is your sales coach' drives adoption, 'look at our new AI tool' doesn't

Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-02-21</description>
      <content:encoded>&lt;p&gt;The tension between individual excitement about AI (senior leaders working weekends, doing a week&amp;#x27;s work in ten minutes) and organisational inertia (unchanged meeting cadences, team structures, adoption rates). 84% of the world has never used AI.&lt;/p&gt;
&lt;h3&gt;What happened this week&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Technical barrier gone, domain expertise matters now: Wharton MBA students built companies in four days with AI tools, non-technical students went furthest&lt;/li&gt;
&lt;li&gt;Apple chose Google over itself for AI features, paying a billion a year for Gemini: build vs buy resolved in favour of partnerships&lt;/li&gt;
&lt;li&gt;McKinsey &amp;#x27;25 squared&amp;#x27;: grow client-facing 25%, cut back-office 25% using AI to rebalance a $20B firm&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;What to try&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Find your misfits: identify the one-in-ten AI enthusiasts, free them from reporting lines, give them mandate beyond old job description&lt;/li&gt;
&lt;li&gt;Change the cadence: if AI makes execution faster, review more often (daily not every two days). Review cycle is now the bottleneck&lt;/li&gt;
&lt;li&gt;Package around problems not platforms: &amp;#x27;this is your sales coach&amp;#x27; drives adoption, &amp;#x27;look at our new AI tool&amp;#x27; doesn&amp;#x27;t&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://steadman.ai/newsletters/david/#edition-2026-02-21"&gt;Read the full edition with all links and sources&lt;/a&gt;&lt;/p&gt;</content:encoded>
      <enclosure url="https://steadman.ai/newsletters/david/podcast/episodes/2026-02-21.mp3" length="5015223" type="audio/mpeg" />
      <guid isPermaLink="true">https://steadman.ai/newsletters/david/podcast/episodes/2026-02-21.mp3</guid>
      <pubDate>Sat, 21 Feb 2026 09:00:00 +0000</pubDate>
      <itunes:title>The wonder and the weight</itunes:title>
      <itunes:episode>1</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:duration>6:21</itunes:duration>
      <itunes:explicit>false</itunes:explicit>
      <itunes:author>David Boyle</itunes:author>
      <itunes:summary>The tension between individual excitement about AI (senior leaders working weekends, doing a week's work in ten minutes) and organisational inertia (unchanged meeting cadences, team structures, adoption rates). 84% of the world has never used AI.</itunes:summary>
    </item>
    <lastBuildDate>Sat, 25 Apr 2026 16:51:14 +0000</lastBuildDate>
  </channel>
</rss>
