A logical argument, built from what is true today, through the question only you can answer, to what you will encounter when you act, and why the window is narrowing.
5th April 2026

Three core skills: comprehension (reading text and images), synthesis (reasoning, analysis, connecting ideas), and writing (including code, and therefore tool use). "Well" matters. The output is already useful, often good, sometimes excellent, today.
A model reads, thinks, writes. Then reads its own output, thinks further, writes again. Research, apps, analytics, computer control: all are read-think-write loops applied to different domains.
"Almost anything" because the loop covers most knowledge work. "Quickly" because compute is fast. "Cheaply" because the marginal cost is negligible versus a human doing the same task. "First draft" because human judgement still owns the final output. The CEO principle: Check, Edit, Own.
Not a slogan. A logical consequence. If you can get a good first draft of almost any knowledge task, at near-zero marginal cost, then the economics of every knowledge-work function are being rewritten. This is true even if capability froze tomorrow.
Everything downstream depends on it: which roles change, which processes to redesign, what to build. And the decision requires genuine knowledge of what AI makes possible, which is why we began by laying out the facts.
Become more efficient and effective at what you already do. Do you get smaller, stay the same size at higher quality, or grow within existing markets?
AI makes some products, services, and markets economically feasible that weren't before. Too hard, too expensive, too complex. A growth path into new territory.
If AI produces good first drafts, the bottleneck moves from production to direction and evaluation. Everyone becomes an orchestrator of AI output: senior leaders, middle managers, junior recruits in their first week. We are all managers now.
Traditionally, judgement came slowly, from years of doing the work yourself. In an AI-augmented world, it is learned more quickly through directing and evaluating AI's work, not through doing it all yourself. A different acquisition path, not a loss.
Without structural adjustment, productivity gains get absorbed. People do the same work to a higher standard, but the organisation does not capture the value. This is an organisational design problem, not a technology problem.
If you don't reorganise roles, teams, and processes, the bottleneck just shifts. Speed up one step and the constraint moves elsewhere: to review queues, to approval chains, to the people making decisions, to the hand-offs between teams.
The honest counterweight. AI touches every function, requires new ways of working, and demands change across multiple dimensions simultaneously. Realism, not pessimism.
This is not about current AI being inadequate. It is already good enough to change everything. The point is that models become more capable on multiple fronts every month, and costs continue to fall. The two compound.
The viable unit of work is shifting. What once had to run as a single prompt followed by a human check can now run as an extended, multi-step workflow. This shift will continue.
We could write an entire guide on how to approach AI transformation, and we are happy to do so if that would be useful. Ask us for it. For now, we want to point out two high-level implications we see people commonly missing.
Not just a technology team. Not just early adopters. AI touches every knowledge-work role, so adoption must be organisation-wide. This is a people challenge as much as a technology one.
Once you have given people the chance, the encouragement, the training, and the support, tough conversations are necessary with those who are still not adopting AI.
Build foundational capability in individuals first. Then embed it in team workflows. Then redesign organisational processes. Only then reach for the genuinely new. Each step depends on the ones before it.