We’re seeing AI implementations begin to move out of the realm of pilots, prototypes, and proofs of concept.

Across industries, organisations are moving from experimentation to enterprise rollout. What was once a curiosity in innovation teams is becoming part of core operations, customer experiences, and decision making.

This shift is transformative, but it is also fraught with risk. The transition from experiment to production is where AI initiatives can stumble. Technologies and approaches that looked promising in controlled environments struggle to deliver at scale; users resist adoption; costs rise; and momentum stalls.

The reasons for this often have less to do with the algorithms themselves than with the teams that build, deploy, and sustain them.

During the experimentation phase, AI projects are usually contained within small, motivated groups. These teams have been there from the start, gaining buy-in early and shaping the system to their own needs.

They understand its quirks and limitations, and they are invested in making it succeed. In this environment, questions of observability, explainability, or long-term maintenance can feel less pressing. The system ‘works’ because the people around it share a common understanding.

Rollouts change the equation. Suddenly, the technology is being used by a far broader and more diverse audience: people with different roles, expectations, and levels of comfort with AI. Some may welcome it, others may distrust it, and still others may see it as a potential threat.

At this point, success depends as much on engagement and communication as it does on technical capability. The system must not only perform but also be trusted, explainable, and maintainable. It must evolve as feedback arrives and as the organisation itself changes.

That is why continuity in teams is so critical. Moving from prototype to production demands groups who carry forward the context of the experiment, remain accountable for its rollout, and stay long enough to adapt it to reality. 

Research on high-performing teams, such as that outlined in Team Topologies, highlights the value of stability: teams that remain intact develop deeper trust, shared knowledge, and the confidence to take bigger risks. As Anna Shipman of the Financial Times has observed, when teams know they will be around to see the results of their decisions, they feel able to place bigger bets.

The trouble is that traditional consultancy models are not designed with continuity in mind. Typically, a consultancy is brought in to deliver a solution, and once the engagement ends, the team departs and takes with it valuable context and expertise. For short, discrete projects, that can work well. But AI is not a ‘deliverable’ in the same way a system migration or compliance project might be. 

It is iterative, dynamic, and deeply dependent on the people who have lived with it from the start. When that knowledge walks out the door, organisations are left to rebuild it, often at significant cost.

This then raises a bigger question for the industry: what role should consultancies play in an era when AI adoption depends so heavily on long-lived, high-performing teams? In my view, clients still need outside expertise and fresh perspectives. 

They still need partners who can accelerate delivery and inject specialist knowledge. But they also need continuity, accountability, and the option to retain hard-won expertise in-house.

One answer is to reimagine consultancy not as a revolving door of talent but as a bridge to sustainable capability. At Counter, we’ve made this approach the very core of our business by assembling UK-based technical teams who work directly inside client organisations. Our teams don’t simply get assigned; they choose to work with each client, ensuring genuine buy-in and alignment.

Most importantly, at the end of an engagement, clients can retain any of the associate consultants as their own employees, at no additional cost. This ensures that the people who conceived the experiment, navigated the rollout, and understand the system’s nuances can remain, forming the kind of long-lived teams that AI demands.

This is not the only model available, but it reflects a shift that feels increasingly necessary. As AI becomes mission-critical, organisations may no longer accept consultancy arrangements that leave them without continuity, or with lost knowledge that must be rebuilt from scratch. They will demand models that combine external expertise with lasting capability, ensuring they can sustain momentum long after the initial project is complete.

The story of AI adoption is often told as one of technology. But in practice, its success depends on the people and teams who build it, maintain it, and guide organisations through the cultural shifts it brings. Experiments can be done in isolation, but enterprise rollouts require trust, stability, and the ability to adapt over time. 

The consultancies that recognise this, and evolve their models accordingly, will be the ones best placed to help organisations turn prototypes into production success.
