What Is Your Organisation Actually For?
What's on my mind
A year ago, a manager at a media company told me he could now do the work of his entire team. Fifteen people. I caught up with him recently. All fifteen are still there.
The rational logic for change has never been clearer. Jack Dorsey is reorganising his 14,000-person company on the argument that AI can replace much of what corporate hierarchy exists to do. I believe it can. An insurance founder I met has gone further: his new company has one and a half employees, with AI handling the rest. But of the hundreds of leaders I've spoken to, he is the only one.
One out of hundreds. Something else is going on.
Economists distinguish stated from revealed preferences - what people say they want versus what their behaviour shows. It applies to organisations too.
Ask a leader what their organisation is for and you get a stated preference: we make great music, we serve our clients, we make money. These all treat the organisation as a production system. If that's the goal, AI is obviously transformative.
But look at revealed preferences. People complain about meetings and fill diaries with them. They stay in roles that don't maximise their output because the team, the rhythm, the relationships matter more than they will say.
Organisations aren't production systems that happen to contain humans. They're often more like human systems that happen to produce things. The reason the manager kept fifteen people is that without them there's no place to be. Not a more efficient place. No place at all.
I train senior teams to use AI. The sessions produce genuine wonder. Three weeks later, many have reverted.
I used to think the problem was the training. I don't any more. Training optimises for individual productivity: do your work faster, with fewer dependencies on colleagues. But if the organisation's real binding force is human collaboration, then making one person radically more productive in isolation is not a gift. It is a threat to the thing they actually value.
The system corrects. Meetings fill back in. Colleagues keep working the same way because the relationships are the point. Managers reward visible collaboration over invisible efficiency.
This is not resistance. It is gravity. And you can't beat gravity by telling people to try harder.
Every big decision an organisation faces turns on whether you think it is more of a production system or a human one.
Should you hire graduates? AI can do entry-level work faster and cheaper. The production-system answer: hire fewer, or stop. But developing someone over years is not just a production decision. It feels more like what many firms are actually for.
Should senior people work alone with AI? This week a leader told me he uses his team far less. AI is faster than briefing them and waiting for something mediocre. He then argued the time saved should go to mentoring junior staff. He automated one form of human collaboration and wanted to replace it with another! A senior person toiling alone with a laptop is a freelancer in a coworking space, not a firm.
Should you become smaller and more efficient, or different and larger? Someone put it simply this week: if you only do today's work with AI, you become a more efficient, smaller company. The alternative is to use the freed capacity for work that wasn't previously economic. Deeper work. Roles that didn't make sense at the old cost structure. Same technology, opposite conclusions.
The manager probably should restructure. Dorsey's logic works. But restructuring will be the exception until leaders reckon with what their organisations actually are.
In a firm I trained last quarter, one person took a useful approach. She identified a weekly synthesis that took three people a full day and rebuilt it as a collaborative workflow in which the AI did the assembly and the team did the judgment. The meetings didn't go away. People came to them with a shared foundation rather than spending their energy building one. She didn't fight gravity; she redesigned the orbit.
Every AI strategy is secretly an answer to a question most leaders haven't asked out loud: to what extent is this a production system, and to what extent a human one? The leaders who get this right will stop fighting gravity and start using it. As Artemis II showed us this week, an orbit, after all, is not the absence of force. It is force put to work.
Three things worth knowing
1. Dorsey wants to replace your org chart with a world model.
Jack Dorsey's essay this week, "From Hierarchy to Intelligence," is the most concrete articulation yet of the case for AI-native organisational design. He traces 2,000 years of hierarchy, from Roman legions through the Prussian General Staff to the McKinsey matrix, and argues all of it exists to route information. He is reorganising around a "company world model": an AI that continuously understands the state of the whole business. The org flattens to three roles: individual contributors, time-boxed problem owners and player-coaches who build and develop people. Block's stock rose 17% after the restructuring announcement. I'm sure he's right about the technology. But the revealed preference of every organisation I've worked with suggests he's wrong about the humans.
2. Mollick says giving AI to IT is usually a mistake. The harder problem is that managers can't see who's using it.
Ethan Mollick's Economist column argues that the dominant corporate instinct, slotting AI into existing processes and handing it to IT, is a strategic mistake. Handing control to a department whose mission is risk elimination is a category error. AI demands the opposite. He also identifies a subtler problem: when companies get the incentives wrong, employees hide their AI use. Some fear punishment. Some don't trust that productivity gains will be shared. Some quietly work 90% less and say nothing. I know MANY such people. Managers can't see what's actually happening, which makes real strategy impossible. (Also: His argument that companies default to cutting 30% of the workforce rather than asking what becomes possible connects directly to the extraction-versus-expansion choice I wrote about in an earlier edition.)
3. Zapier just raised the floor for what "AI fluent" means.
Zapier released Version 2 of its AI Fluency Rubric, used for every hire across the company. The floor has moved. "Capable" now requires AI embedded in core workflows with repeatable systems, not one-off prompts. They assess trajectory ("slope"), not snapshots. They've added accountability as a fourth dimension alongside mindset, strategy and building. Managers must demonstrate team-wide adoption, not just personal fluency. In skills tests, they watch candidates prompt, push back on output and iterate in real time. A rough result with strong reasoning beats a polished one with no visible process. Wade Foster open-sourced V1 last year and hundreds of companies adopted it. V2 reflects how fast the baseline has shifted. If your organisation hasn't defined what "good" looks like for AI use, Zapier just gave you a starting point.
Try this
Before you build anything, have AI interview you first.
Don't describe what you want and ask AI to build it. Instead, ask the model to interview you: "I want to build X. Ask me every question you need answered before starting." It probes edge cases, surfaces assumptions and tightens scope before a line of work begins. Many ideas aren't as clear or well thought through as you think they are. Better to discover that in a five-minute conversation than a five-hour build.
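If you want to make the interview a repeatable habit rather than a one-off prompt, here is one way it could look as a small script. This is only a sketch under assumptions of mine, not a prescribed setup: it uses the official openai Python SDK, expects an OPENAI_API_KEY in the environment, and the model name and exit word are illustrative. A plain chat window works just as well.

```python
# Sketch of the "interview me first" pattern as a small loop.
# Assumptions: official `openai` SDK installed, OPENAI_API_KEY set,
# model name illustrative.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "user",
    "content": ("I want to build X. Before any work starts, interview me: "
                "ask me every question you need answered, one at a time."),
}]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=history,
    ).choices[0].message.content
    print(f"\nAI: {reply}")
    answer = input("You (type 'done' to stop): ")
    if answer.strip().lower() == "done":
        break
    # keep the running conversation so each question builds on the last answer
    history += [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": answer},
    ]
```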
Before you analyse anything, ask AI what looks weird.
Next time someone sends you a spreadsheet, a report or a set of financials, upload it to Claude or ChatGPT and ask: "Read this and tell me what looks unusual." I coached a finance professional this week who receives portfolio company P&Ls regularly. Before she even opens the numbers now, AI flags the things worth checking: an unusually high margin, a budget assumption that changed between the original and revised forecast, a line item that doesn't match the pattern. Five or six flags in thirty seconds. She still does the analysis. But she starts it knowing where best to look.
Don't build an app. Let AI be the app.
I coached someone this week who wanted to build a web application for an investment screening workflow. The answer was simpler than they expected: rebuild the process as a repeatable skill inside an AI tool rather than as a standalone app. The AI itself becomes the application. The advantage is resilience: if the input format is wrong or a step fails, the AI adapts on the fly; a standalone app just stops. If you have a multi-step workflow you keep wishing someone would build software for, try describing it to your AI tool and asking it to turn the process into something you can re-run with one command.
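For readers who like to see the shape of "re-run with one command", here is a minimal sketch. The assumptions are mine, not the coachee's actual setup: it uses the official openai Python SDK, an OPENAI_API_KEY in the environment, an illustrative model name, and a hypothetical workflow description standing in for the real screening process.

```python
# Sketch of "the AI is the app": a multi-step workflow captured as a
# reusable instruction and re-run with one command.
# Assumptions: official `openai` SDK installed, OPENAI_API_KEY set,
# model name and workflow text illustrative.
import sys
from openai import OpenAI

WORKFLOW = """You are running my investment screening workflow.
1. Extract the key financials from the document below.
2. Flag anything unusual against the stated assumptions.
3. Produce a one-page screening summary with a clear recommend / pass call.
If the input format is wrong or a step cannot be completed, say so and adapt."""

def screen(path: str) -> str:
    document = open(path, encoding="utf-8").read()
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": WORKFLOW},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # one command re-runs the whole process, e.g. `python screen.py notes.txt`
    print(screen(sys.argv[1]))
```

The point is less the code than the shape: the process lives in the instruction, so when the input changes the model adapts rather than erroring out the way a hard-coded app would.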
What readers said
Last week's "The system and the surrender" drew 50 replies, the most substantive batch yet. One reader called AI "Google Maps for the brain: a few clicks, brain off, a turn here, a turn there, and suddenly you've driven into a muddy field." Another caught their AI making a confident arithmetic error and asked the question that keeps coming up: with a junior analyst you can give feedback and they improve, so how do you hold an AI to account? A reader in government spent hours building what they called a "PROMPT COACH" that encodes institutional judgment for their role: the system and the surrender in one project. And a reader in the Middle East raised the apprenticeship question directly: if AI reduces the reps that juniors get, where do they learn critical thinking? I built something in response. The full, anonymised reader feedback is online, along with a new community leaderboard showing who's been most engaged (anonymised, of course! Email me if you want YOUR rank :).