SEB POTTER

Technology × Humanity × Question Everything

The people driving your AI transformation don’t have permission.

DATE: 2026-03-15 | 7 MIN READ

There’s a version of the AI transformation conversation that plays out in boardrooms and strategy decks, and it usually sounds like this: we need an AI strategy, we need a roadmap, we need to understand the implications, we need a committee. Six months later, the committee has produced a report. Nobody’s workflow has changed.

Meanwhile, somewhere in your organisation, someone has already figured it out.

They’re not waiting for permission. They’re using AI to do the work of three people. They’ve rebuilt their personal workflow so thoroughly that going back to the old way would feel like switching from a car to a horse. They’re probably doing this quietly, because the organisational defaults make it difficult to do openly, and they’ve learned that asking for permission is slower than asking for forgiveness.

These people are your AI transformation. You just haven’t noticed them yet.

The productivity gap is already inside your building

This is the part that most AI strategies miss entirely. They frame AI adoption as something that needs to be pushed into the organisation from above. A training programme. A tool rollout. A centre of excellence. All of which can be useful, but none of which addresses the actual dynamic, which is that adoption is already happening unevenly and informally.

The gap is not between your company and the market. It is between the people inside your company who have already changed how they work and the people who haven’t. And that gap is widening fast, because the people who are already using AI effectively are getting better at it every week, while the people who aren’t are still waiting for someone to tell them what to do.

If you lead a team or a business, you need to find the people on the advanced side of that gap. They are your most valuable asset in this transition, and right now you are almost certainly wasting them.

What these people look like

They are not necessarily in engineering. They might be, but they might also be in product, design, marketing, operations, or client services. The common thread is not technical skill. It is curiosity and agency. They tried the tools. They stuck with them past the initial frustration. They iterated on how they work until the AI became load-bearing rather than a novelty.

You can usually spot them by output. They’re producing more than seems reasonable. They’re finishing things faster than the process says they should. They might be slightly evasive about exactly how they’re working, because they’ve learned that “I used AI for that” sometimes triggers a conversation they’d rather not have.

That evasiveness should worry you, because it means your organisation is creating an incentive to hide the most productive behaviour rather than spread it.

The instinct to restrict is understandable and wrong

Most organisations, when they notice AI being used informally, react by tightening controls. This is understandable. There are real risks around data privacy, intellectual property, security, and compliance. A blanket policy feels safer than selective permissiveness.

But here’s the problem: a blanket restriction doesn’t just stop the risky behaviour. It stops the productive behaviour. And the people you’re restricting are often the ones who understand the risks better than the people writing the policy, because they’ve been living with the tools long enough to know where the boundaries are.

The result is a policy that protects the organisation from a theoretical risk while guaranteeing a real cost: the people who could be driving transformation are either working around your policy quietly (which is worse than no policy at all) or giving up and directing their energy elsewhere. Possibly towards a side project. Possibly towards a job search.

Risk perimeters, not blanket rules

The better approach is to think in terms of risk perimeters rather than blanket permissions. Instead of saying “you can use AI for these specific approved tasks,” say “here are the areas where AI use is not permitted, and everywhere else, you have broad permission to change how you work.”

That inversion matters. The first framing creates a whitelist that is always out of date, always too narrow, and always requires someone to approve new uses. The second creates a blacklist of genuinely sensitive areas (client data, personally identifiable information, security-critical systems, whatever your organisation needs to protect) and gives people maximal autonomy everywhere else.

Define the boundaries. Be specific about what is out of bounds and why. Then get out of the way.
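The inversion is easy to see if you express both framings as a policy check. Here is a minimal sketch in Python; the category names and task names are hypothetical examples, not a recommended policy:

```python
# Hypothetical deny-list: the short, explicit set of genuinely
# sensitive areas. Everything NOT in this set is permitted.
DENIED_PERIMETERS = {
    "client_data",        # anything touching client records
    "pii",                # personally identifiable information
    "security_critical",  # auth, secrets, production systems
}

def allowed_under_perimeter(task_category: str) -> bool:
    """Risk-perimeter framing: permitted by default, denied only
    inside a defined perimeter. New uses work without approval."""
    return task_category not in DENIED_PERIMETERS

# Hypothetical allow-list: the only things someone has approved.
# Everything NOT in this set is blocked until the list is updated.
APPROVED_TASKS = {"draft_marketing_copy", "summarise_meeting_notes"}

def allowed_under_whitelist(task_category: str) -> bool:
    """Whitelist framing: denied by default, permitted only if
    someone anticipated and approved this exact use."""
    return task_category in APPROVED_TASKS

# A novel, harmless use nobody anticipated:
new_use = "refactor_internal_docs"
print(allowed_under_perimeter(new_use))   # True  (broad permission)
print(allowed_under_whitelist(new_use))   # False (waiting for approval)
print(allowed_under_perimeter("pii"))     # False (perimeter holds)
```

The point of the sketch is the default: the deny-list fails open for new, legitimate work while still blocking the sensitive areas, whereas the allow-list fails closed for everything nobody thought to approve.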

This requires trust. It requires accepting that you cannot control every detail of how people use these tools. But it also reflects reality, because you can’t control it anyway. You can only choose between trust with boundaries and the illusion of control.

What happens when you get this right

When you give these individuals space to work the way they’ve already been working, something interesting happens. They stop hiding it. They start showing their colleagues what’s possible. Not through formal training or change management programmes, but through the most powerful mechanism available: visible results.

When someone on a team is consistently delivering more, faster, and at a quality level that others can see, it creates pull rather than push. Their colleagues start asking “how are you doing that?” rather than being told “you should try this tool.” The enthusiasm spreads because it’s grounded in demonstrated capability, not a vendor demo or a leadership initiative.

This is how real transformation happens in organisations. Not top-down, not bottom-up, but through lateral influence from people who are already proving what’s possible.

The uncomfortable bit

There’s a harder truth underneath all of this. The people who have already transformed their personal workflow are also the people who are most likely to leave.

They know what they can do. They know their market value has changed. They can see the gap between how they work and how the organisation around them works, and that gap is frustrating. If your organisation’s response to their capabilities is to restrict them, slow them down, or ignore what they’re achieving, they will eventually find somewhere that doesn’t.

The retention argument for AI adoption is underappreciated. The best thing you can do to keep your most adaptive people is to let them be adaptive. Give them the boundaries they need to be responsible, and then let them run.

What to actually do

Find them. You probably already know who they are. If you don’t, look for the people who are producing more than their role seems to allow.

Talk to them. Ask what they’re doing, how they’re doing it, and what’s getting in their way. Do this with genuine curiosity, not audit energy.

Define your risk perimeters. Be explicit about what is genuinely off limits and why. Make the list as short as you responsibly can.

Give them broad permission. Not for specific tools or specific tasks, but for changing how they work within those boundaries.

Make the results visible. Let them present to their teams. Let them show what changed. Let the enthusiasm be organic rather than manufactured.

Then get out of their way.

Your AI transformation is not something you need to go and find externally. It’s already inside your organisation, being done by people who figured it out on their own. Your job is to notice them, support them, and let the rest of the organisation see what they’ve been doing.

Just let them go out and make something.