The Bottleneck in Enterprise AI Is Not the Technology

Sarah McKenna · 10 min read

Our Ed Ortega sat down with Cisco's SVP of AI Software and Platform, DJ Sampath, for a frank and insightful conversation about what's really getting in the way of enterprise AI adoption and success.


For most of the past decade, conversations about artificial intelligence inside large organisations tended to begin with capability. Could the models perform the task? Was the technology mature enough? Did the organisation have the infrastructure or the talent to build something meaningful with it?

Those questions made sense at the time. The tools were still evolving and the barrier to entry was high. AI projects required specialist teams, complex data pipelines and significant technical investment. For many organisations the question of adoption was genuinely constrained by the technology itself.

That constraint has largely disappeared.

Large language models, coding assistants and increasingly capable agents have lowered the cost of experimentation dramatically. Teams that would previously have waited months for machine learning support can now prototype ideas in days. Entire categories of cognitive work such as drafting, summarising, analysing and coding are now routinely assisted by systems that did not exist in practical form just a few years ago.

And yet, if you speak with the people responsible for deploying these systems across large enterprises, a different pattern emerges. The obstacle is rarely capability. It is trust.

DJ Sampath, who leads AI software at Cisco, describes the shift bluntly. The question organisations are grappling with is no longer whether AI can generate value. It is whether it can be used safely enough, visibly enough and predictably enough to justify integrating it deeply into the business.

In other words, the constraint has moved from technology to governance.

This is a familiar transition in the lifecycle of powerful technologies. In the early stages, innovation is limited by what the tools can do. Later, progress slows because organisations must figure out how to control them. The more powerful the capability, the more important the governance.

Artificial intelligence has now reached that second phase.

One reason the transition feels uncomfortable is that AI quietly breaks a long-standing assumption about enterprise software. Historically, software behaved in predictable ways. Developers wrote code, deployed it and, once it was in production, its behaviour remained largely stable until someone changed it deliberately.

AI systems behave differently. Increasingly they generate outputs dynamically, interpret context and choose actions based on probabilistic reasoning. Coding agents can propose new logic. Autonomous systems can chain together multiple tasks. The behaviour of the system is no longer entirely fixed at the moment of deployment.

For organisations used to deterministic software, this changes the nature of risk.
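
The difference is easy to state in code. The sketch below is a toy illustration, with both functions invented for the example: the first is classic deterministic enterprise logic, the second stands in for a sampled model output.

```python
import random

def apply_discount(amount: float) -> float:
    """Classic enterprise logic: the same input yields the same output, every run."""
    return round(amount * 0.9, 2)

def summarise_ticket(ticket: str, temperature: float = 0.7) -> str:
    """Stand-in for a model call: with sampling, the wording can differ per run."""
    phrasings = [
        "Customer cannot log in after a password reset.",
        "Login failure reported; suspected reset issue.",
        "User locked out following password reset.",
    ]
    return phrasings[0] if temperature == 0 else random.choice(phrasings)
```

Everything enterprises built for testing, auditing and change control assumes the first kind of function. Governance exists because the second kind is now in production.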

The questions that suddenly matter are not purely technical. They are organisational.

Which models are being used across the company?

Where are agents operating?

What data can they access?

Under whose identity are they acting?

What guardrails exist if behaviour deviates from expectation?
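
Taken together, these questions describe the schema of an AI asset register. As a purely hypothetical sketch, with every field and value invented for illustration rather than drawn from the conversation, the answers for a single system might be captured like this:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One governed AI system: the answers to the questions above, held as data."""
    name: str                                              # what the system is
    model: str                                             # which model it uses
    location: str                                          # where the agent operates
    data_scopes: list[str] = field(default_factory=list)   # what data it can access
    acting_identity: str = "unassigned"                    # under whose identity it acts
    guardrails: list[str] = field(default_factory=list)    # controls if behaviour deviates

# Example entry; every value is invented for illustration.
record = AIAssetRecord(
    name="support-ticket-summariser",
    model="approved-llm-v2",
    location="eu-west/support-cluster",
    data_scopes=["support-tickets:read"],
    acting_identity="svc-support-ai",
    guardrails=["pii-redaction", "human-review-queue"],
)
```

A register is then simply the collection of such records. Visibility begins when every AI workload in the company has one.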

Without clear answers to these questions, even promising AI projects stall. Legal teams become cautious. Security teams slow deployments. Product leaders hesitate to scale experiments beyond small pilots.

The technology itself may be ready, but the organisation is not yet confident enough to rely on it.

This is why governance has become the central theme in enterprise AI discussions.

Sampath frames the operational challenge in three simple steps. First, organisations need visibility. They must know where AI workloads are running, which models are in use and how agents interact with systems and data. Without visibility, there is no meaningful oversight.

Second comes validation. Not every model is appropriate for every context. Some industries face regulatory restrictions on data flows or external models. Governance means ensuring the systems deployed inside the organisation meet security, compliance and operational standards.

Third is runtime protection. Even approved systems require guardrails. AI can behave unexpectedly, particularly when interacting with other systems. Organisations need mechanisms that monitor behaviour and intervene when something moves outside acceptable boundaries.

Cisco summarises the framework in three verbs that are deliberately practical: discover, detect and protect.
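
To make the framework tangible, here is a minimal sketch of how those three verbs might map onto code. It illustrates the pattern only: the function names and rules are invented for the example and do not represent Cisco's implementation.

```python
APPROVED_MODELS = {"approved-llm-v1", "approved-llm-v2"}  # validation allow-list (invented)

def discover(workloads: list[dict]) -> list[dict]:
    """Visibility: enumerate every AI workload the platform can see."""
    return [w for w in workloads if w.get("uses_ai")]

def detect(workload: dict) -> list[str]:
    """Validation: flag anything that falls outside approved standards."""
    issues = []
    if workload["model"] not in APPROVED_MODELS:
        issues.append(f"unapproved model: {workload['model']}")
    if ("customer-pii" in workload.get("data_scopes", [])
            and "redaction" not in workload.get("guardrails", [])):
        issues.append("PII access without a redaction guardrail")
    return issues

def protect(workload: dict, issues: list[str]) -> dict:
    """Runtime protection: intervene when behaviour leaves acceptable bounds."""
    workload["status"] = "quarantined" if issues else "running"
    return workload

workloads = [
    {"uses_ai": True, "model": "approved-llm-v2",
     "data_scopes": ["support-tickets"], "guardrails": ["redaction"]},
    {"uses_ai": True, "model": "shadow-llm",
     "data_scopes": ["customer-pii"], "guardrails": []},
]
for w in discover(workloads):
    print(protect(w, detect(w)))
```

Each verb maps to a different control surface: discovery to inventory tooling, detection to policy evaluation, protection to runtime enforcement.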

What makes the governance problem particularly complex is that it cannot be solved by a single function. AI sits at the intersection of disciplines that historically operated in parallel. Legal teams care about intellectual property and regulatory exposure. Security teams worry about data leakage. Engineers focus on reliability and performance. Product leaders focus on speed and user value.

In organisations deploying AI seriously, these groups are increasingly brought together into responsible AI committees or working groups. Their purpose is not simply to write policy. It is to develop a shared vocabulary for discussing risk.

Without that shared language, conversations about AI quickly become unproductive. Engineers hear exaggerated fears. Legal teams hear casual dismissal of risk. Product teams hear friction where they expected momentum.

The work of governance is therefore partly technical and partly cultural.

Culture, in fact, may be the more difficult element.

Technology adoption inside organisations has always depended on signals from leadership. When executives demonstrate curiosity and responsible experimentation, teams feel permission to explore. When leaders signal fear or hesitation, experimentation quietly slows.

Cisco’s internal approach has been to embed AI broadly across workflows. Engineers draft specifications with AI assistance. Sales teams analyse customer journeys using models. Customer support teams summarise large volumes of support tickets and identify patterns that previously took days to surface.

The philosophy is pragmatic. AI is treated as a collaborator rather than an autonomous decision-maker. As Sampath describes it, the goal is to ensure that humans remain in control. AI is managed by people. It does not manage them.

This distinction matters because much of the anxiety around AI adoption comes from imagining systems operating independently of human oversight. In reality, most enterprise deployments place humans firmly inside the loop, supervising outputs and providing feedback that improves performance over time.

The more immediate challenge is not runaway autonomy but organisational misalignment.

Recent research illustrates the point starkly. Surveys of enterprise AI adoption show a wide gap between how leaders perceive progress and how employees experience it. Executives often believe AI initiatives are well established and delivering results, while individual contributors report inconsistent access to tools, limited training and unclear guidance on when or how AI should be used. 

This perception gap matters because transformation programmes depend on shared understanding. When leadership believes progress is further advanced than it actually is, attention shifts prematurely to the next initiative. Meanwhile, the underlying adoption remains shallow.

The result is an illusion of transformation.

There is another dimension to the challenge that is only now becoming visible. As AI systems become more capable, they also become more demanding of the infrastructure around them. Agents generate constant interactions with data systems, APIs and other agents. Every query requires compute. Every output requires inference.

The scale of activity grows quickly.

Organisations that treat AI as an isolated tool will eventually discover that their underlying platforms are not prepared for the load. Systems that were designed primarily for human interaction suddenly need to support automated reasoning, continuous analysis and machine-driven workflows.
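
A back-of-envelope calculation makes the point. Every figure below is an invented assumption, chosen only to show how quickly the numbers compound:

```python
# Illustrative load estimate; all figures are invented assumptions.
agents = 200                     # autonomous agents in production
calls_per_agent_per_hour = 30    # tool, API and data interactions per agent
tokens_per_call = 1_500          # prompt plus completion per interaction

tokens_per_hour = agents * calls_per_agent_per_hour * tokens_per_call
print(f"{tokens_per_hour:,} tokens/hour")  # 9,000,000 tokens/hour
```

And unlike human users, agents do not stop at the end of the working day.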

In effect, AI adoption becomes an architectural question.

Strong platforms create leverage by providing reusable capabilities that allow teams to build and deploy new functionality quickly. Weak foundations force teams to reinvent solutions repeatedly, slowing progress and increasing operational risk. 

Seen through this lens, the story of enterprise AI adoption becomes less about models and more about operating models.

The organisations moving fastest are not necessarily those with the most advanced technology. They are the ones that have invested in governance structures, cultural adoption and the underlying platforms required to support AI-driven work.

The ones that struggle often have access to the same models but lack the organisational clarity to use them confidently.

The irony is that the technology itself is improving at extraordinary speed. New models, tools and capabilities appear almost weekly. But organisational change still moves at human pace.

Policies must be written. Systems must be instrumented. Teams must learn new ways of working. Leadership must develop confidence in tools that behave differently from the software they are used to governing.

All of that takes time.

Which is why the real bottleneck in enterprise AI today is not the technology.

It is the organisation’s ability to trust it.


