The first wave of enterprise AI delivered experimentation at unprecedented speed but left many organisations with impressive pilots and little operational change. As reality replaces excitement, the industry is discovering that scaling AI is less a technology challenge than a test of leadership, organisational design and execution discipline.
Early AI programmes are stalling for familiar reasons. Generative AI enabled easy experimentation but also encouraged mistaking busyness for progress. Organisations now face shelves of proofs of concept that failed to drive operational change.
This isn’t just hype meeting reality. It’s weak business anchoring, fragile foundations, and excluded humans. AI didn’t fail. Execution did. The risk now is that enterprises misdiagnose the problem and downgrade AI from priority to distraction.
“It is a common theme right now,” says Dr Michael Eiden, Managing Director at Alvarez & Marsal. “To an extent, it is expected because generative AI has lowered the bar with experimentation. It is a good thing to get your hands dirty. But if the use case is not anchored in the business case and not anchored in the right technology foundations, you struggle to scale once you want to productionise it.”
What begins as a promising demo often collapses under the weight of real-world expectations. The first shock is rarely a technical failure but one of measurement. “Companies struggle with constant measurement of the algorithm,” Eiden says. “Something looks nice and fancy as a proof of concept, but once you put it into production, you want to measure adoption, uptake and the upside the system generates. Often, organisations do not think about it. Then the business starts asking how we quantify the impact, fatigue sets in, and initiatives stall.”
The second shock is organisational. AI is treated as an IT project, not a business capability, so nobody owns the messy parts. “We still see cases where senior management pursues AI for its own sake,” Eiden says. “It becomes just another IT project, not a core capability. Without the right team or incentives for end users, it just gathers dust.”
The pattern is troubling because it mirrors past digital transformations. But, Eiden says, this time the market won’t wait. “The pressure and disruption are different. Future profitability depends on preparation. The stakes are higher now. You can’t ignore this.”
Why pilots keep failing
When a CEO says, “We tried AI, and it didn’t work,” Eiden’s instinct is to slow down and diagnose, not challenge the ambition.
“First of all, you are still in good company,” he says. “You need to learn from what did not go well. You need to understand where things went wrong. Was it the foundation layer, was it adoption, or was the algorithm misdesigned? If you learn from it, it is a step in the right direction.”
This framing is key: AI should not be dismissed as just another failed experiment, nor should pilots be regarded as mere symbols of innovation. The critical takeaway is that each pilot must be evaluated as a genuine test of the organisation’s ability to implement change.
Eiden sees three recurring failure modes. None of them is exotic. The first is poor alignment between the algorithm and the real business problem. “If the algorithm is not close enough to the business case you have to solve in the wild, that is a problem,” he says. “A pilot can look great on static data, but it does not solve what the business actually needs.”
The second is technical fragility under operational pressure. “A concept may work on static data, but not scale with fast data streams. If architecture is wrong, it fails in production.”
The third is the most underestimated and often the most fatal: humans. “If the end user was an afterthought, then you might have a nice tool,” he says. “But people do not want to use it, because they were not part of the process, or they think it threatens them.”
Choosing the right pilot is not a technical decision but a behavioural one. The key is to start with pilots that are visible and simple enough to build confidence quickly. As Eiden says, “People need to see it and believe it in the wild, not in a demo. If you aim too high and it takes too long, people lose patience.”
There is a moment, he says, when momentum becomes self-reinforcing. It is not a dashboard or a KPI. It is emotional. “It is magical when you see it,” he says. “Leadership realises we now have insights we would not have been able to generate before. It hits very deep. If everything is lined up, clients want more of what they have seen and start pushing the momentum themselves.”
That belief gap is a major constraint. Some executives approach AI as an obligation, not an opportunity. “It relates to subject-matter familiarity,” Eiden says. “Execs who take basic AI education grasp the tech’s impact. The closer they are, the more powerful the first win.”
Education isn’t about turning leaders into engineers. It’s about ending the myth that AI is too mysterious to govern. “AI is a meta technology,” he says. “Everyone needs a basic grasp of its capability.”
From automation to decision intelligence
One reason so many AI initiatives stall is that enterprises frame AI primarily as automation. They get faster workflows, then wonder why outcomes do not change. Eiden is pushing a different value category: decision augmentation.
His preferred instrument is causal AI, models built to explain why things happen, not just what correlates. “It is not on many executives’ radar,” he says. “Causal technologies uncover root causes, making them sharper tools for mission-critical decisions. They distinguish correlation from causality.”
His example: marketing spend. “ROI on marketing is traditionally a black box. Correlation finds drivers, but causal tech explains what drives outcomes, with no endless A/B tests needed.”
The appeal isn’t just accuracy, but explainability. “They’re explainable,” he says. “That’s why financial services and pharma are interested. You can’t audit a huge language model, but you can audit these.”
Causal AI demands a new organisational discipline. It begins with business self-understanding. “You gather experts to create a mental model of the system, then model it,” he says. “In classical machine learning, you throw data at a problem and hope the model converges. This is the opposite.”
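Eiden's marketing example can be reduced to a toy simulation. The sketch below is purely illustrative (the variables, coefficients and scenario are invented, not from the article, and real causal AI tooling involves structural models far richer than this): in a world where seasonal demand drives both marketing spend and sales, the naive correlation between spend and sales overstates the true causal effect, while adjusting for the known confounder recovers it.

```python
import random

random.seed(0)

# Simulated world (all numbers invented for illustration): seasonal
# demand drives BOTH marketing spend and sales, so spend and sales
# correlate more strongly than the true causal effect (2.0) warrants.
n = 5000
season = [random.gauss(0, 1) for _ in range(n)]
spend = [3.0 * s + random.gauss(0, 1) for s in season]
sales = [2.0 * x + 5.0 * s + random.gauss(0, 1) for x, s in zip(spend, season)]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def slope(x, y):
    """Least-squares slope of y on a single regressor x."""
    return cov(x, y) / cov(x, x)

# Naive correlation-based estimate: biased upward (about 3.5 here),
# because it absorbs the confounder's effect on sales.
naive = slope(spend, sales)

# Adjusting for the confounder (regress both variables on season, then
# relate the residuals) recovers the true causal effect of about 2.0.
rx = [x - slope(season, spend) * s for x, s in zip(spend, season)]
ry = [y - slope(season, sales) * s for y, s in zip(sales, season)]
adjusted = slope(rx, ry)

print(f"naive slope: {naive:.2f}, causal (adjusted) slope: {adjusted:.2f}")
```

This is the correlation-versus-causality distinction in miniature: the first number would lead a marketing team to over-invest, the second reflects what spend actually moves. It also shows why the "mental model first" discipline matters: the adjustment only works because the confounder was named up front.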
Eiden believes causal methods will grow in prominence despite currently being overshadowed by agentic AI hype. “This year, everyone talks about agents,” he says. “Next year, people will realise automation is useful, but sometimes smarter decisions from complex data are needed. Language models cannot do this well, but causal methods can.”
He is not dismissive of agents, but sceptical of their maturity for high-stakes workflows. “LLMs using tools have value,” he says. “But for precision and observability, we are not there yet. In some processes, failure is not an option. Errors propagate. There is also a cost issue: some systems burn through tokens very quickly.”
Despite the focus on models, Eiden sees data readiness as the real limiter. “True readiness is being able to access mission-critical data programmatically and with low latency,” he says. “Most organisations have fragmented landscapes; structured and unstructured data remain siloed.”
Buying a platform doesn’t ensure data readiness. “If you have unstructured data, you need architectures where all data is available regardless of shape and language,” he says. “Otherwise, you are building on sand.”
Asked to choose between great infrastructure and poor data, or strong data and weak infrastructure, Eiden does not hesitate. “Definitely the second,” he says. “If there’s no signal in the data, stop. If there is, you can leapfrog. Garbage in, garbage out still applies.”
He sees more companies moving AI workloads on-premise. “Many find it more cost-effective and value control. Sovereign AI, geopolitical risk, and vendor dependence are now board concerns.”
Governance, often seen as an innovation blocker, looks different to Eiden. “With strong MLOps, responsible AI isn’t dramatic. Model repositories, observability, traceability. It is effort, but it’s good practice.”
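In practice, the traceability Eiden describes amounts to recording, for every prediction, which model version saw which input and when. A minimal sketch, assuming nothing about any particular MLOps product (all names here are hypothetical and the "model" is a toy scoring function):

```python
import hashlib
import json
import time

AUDIT_LOG = []  # in production this would be an append-only store

def traced_predict(model_fn, model_version, features):
    """Run a prediction and record enough metadata to audit it later."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        # Hashing the input gives a tamper-evident reference without
        # storing potentially sensitive raw features in the log.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": model_fn(features),
    }
    AUDIT_LOG.append(record)
    return record["prediction"]

# Toy model: a fixed linear score, versioned so an auditor can tie any
# logged outcome back to the exact logic that produced it.
score = lambda f: 0.3 * f["tenure"] + 0.7 * f["usage"]
y = traced_predict(score, "churn-model@1.2.0", {"tenure": 4, "usage": 10})
```

The point is Eiden's: none of this is dramatic. Versioned models, hashed inputs and an append-only log are ordinary engineering hygiene, and they are most of what "responsible AI" asks for operationally.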
If Eiden wants one change in the coming years, it’s not a new model or platform but a better shared language within organisations. “We lump too many things into the term AI,” Eiden says. “It creates fuzziness. I hope we become more nuanced and more knowledgeable. That will make planning and deployment easier.”
His closing advice is less about tactics than mindset. “We are in an exponential phase,” he says. “Disruption is everywhere. Do not think linearly about what AI can do. An exponential mindset should be at the front of mind, in urgency, and in how quickly things are changing around us.”