Enterprises are investing heavily in generative AI, but most fail to realise their ambitions. Research shows the main barriers are not technical but organisational: weak processes, fragile data foundations, and unclear expectations about AI’s real capabilities.
In large organisations, AI adoption has moved past initial excitement and into hard operational challenges. The root cause of failure is not insufficient technology but the struggle to implement, integrate, and measure transformation across real business processes and data flows.
Recent survey research by ABBYY clearly reflects this tension. While most organisations report some level of AI adoption, a significant proportion admit that training models, integrating them into workflows, and governing their use have proven far harder than expected. The contradiction is not about willingness or investment. It is about foundations.
According to Max Vermeir, Senior Director of AI Strategy at ABBYY, the issue is structural rather than technical. “Many organisations underestimated the speed of technological change, but more importantly, they underestimated the complexity of integrating AI into real business environments,” he says. “Training generative models is only one part of the challenge. Introducing advanced technology into broken or outdated processes simply does not work, and AI tends to expose those weaknesses rather than compensate for them.”
Furthermore, cultural elements such as legacy norms and entrenched work practices can silently resist integration efforts. Organisational silos and a tendency to avoid blame complicate the transition further. That observation reframes the debate over enterprise AI: the core issue is not that AI is overpromised, but that it is being applied to operating models that were never designed to support it.
Why AI strategy keeps breaking down
A key research finding is that many organisations have AI strategies that lack specific, operational definitions of success, focusing on vague adoption metrics instead of measurable process or business impact.
For Vermeir, this reflects a persistent technology-first mindset. “There is no AI without process intelligence,” he continues. “A measurable AI strategy must start with understanding and mapping processes across the business. If leaders begin with the technology, they end up measuring the wrong things. Success should be defined by accuracy, compliance, speed, or business impact, not by how many AI tools are deployed.”
This distinction shapes everything that follows. When AI success is measured solely by experimentation, inefficiency is tolerated. When measured by outcomes, assumptions are quickly challenged.
“AI implementations need to be continuously monitored and refined to ensure they are delivering a return,” Vermeir explains. “If an AI tool stops being useful, it needs to be rethought. AI is not something you deploy once and move on from. It must justify its place over time.”
The survey reinforces this point. Organisations that tied AI initiatives to specific operational metrics reported greater confidence in outcomes and a clearer understanding of value. Those who did not often struggled to move beyond pilot stages.
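What “specific operational metrics” can mean in practice is easy to sketch. The following Python snippet is not drawn from the research or from ABBYY’s tooling; the metric names and thresholds are assumptions chosen purely for illustration. The point is simply that every run of an AI workflow records its outcomes, and the initiative is judged against targets rather than against how many tools were deployed.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class OutcomeTracker:
    """Track operational outcomes for a single AI initiative.

    Metric names and thresholds are illustrative assumptions, not prescriptions.
    """
    accuracy_target: float = 0.95    # e.g. document-extraction accuracy
    latency_target_s: float = 2.0    # e.g. seconds per processed document
    compliance_target: float = 1.0   # share of runs passing policy checks
    runs: list = field(default_factory=list)

    def record(self, accuracy: float, latency_s: float, compliant: bool) -> None:
        """Log the outcome of one workflow run."""
        self.runs.append((accuracy, latency_s, compliant))

    def report(self) -> dict:
        """Judge the initiative against outcome targets, not tool counts."""
        accs, lats, comps = zip(*self.runs)
        return {
            "accuracy_ok": mean(accs) >= self.accuracy_target,
            "latency_ok": mean(lats) <= self.latency_target_s,
            "compliance_ok": mean(comps) >= self.compliance_target,
        }

tracker = OutcomeTracker()
tracker.record(accuracy=0.97, latency_s=1.4, compliant=True)
tracker.record(accuracy=0.93, latency_s=2.8, compliant=True)
print(tracker.report())  # {'accuracy_ok': True, 'latency_ok': False, 'compliance_ok': True}
```

A tracker of this kind also operationalises Vermeir’s point about continuous monitoring: if the report starts missing its targets, the tool’s place is no longer justified.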
Slowing down to move faster
The rush to adopt AI continues, driven by competition and market noise, and Vermeir argues this urgency is often counterproductive. Rather than hastily bolting on AI tools without a clear strategy, businesses should pace adoption deliberately. A hurried rollout may put tools in place sooner, but it often brings long-term inefficiencies and integration problems. A paced approach, in which time is taken to understand processes and ensure proper integration, typically realises net value faster, because it avoids costly mistakes and aligns AI tools with genuine business needs.
This makes process intelligence foundational. “Process AI provides visibility into how a business runs,” Vermeir adds. “It helps identify bottlenecks and inefficiencies, so technologies chosen genuinely improve performance, not add complexity to an unstable baseline.”
According to the research, organisations that combined generative AI with process intelligence and document AI reported higher accuracy, better compliance, and stronger trust in outputs. The pattern is consistent across industries. “Generative AI on its own is rarely enough,” Vermeir says. “It needs structure and context. Without that, even the most advanced models struggle to deliver consistent value.” This is not about resisting innovation. It is about sequencing it correctly.
Data quality is a leadership problem
In the survey, data quality emerges as the biggest barrier to AI maturity. Vermeir says poor data reflects both operational and cultural issues: “Operationally, it complicates training and integration,” he says. “Culturally, it shows a tendency to treat AI as a quick fix instead of a long-term capability needing care and governance.”
This mindset leads to predictable disappointment. Expecting AI to fix fragmented systems or weak data governance only guarantees failure. “AI amplifies what exists,” Vermeir adds. “If your data is inconsistent, AI will only scale the problem.”
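What “scaling the problem” looks like is easy to illustrate. Below is a minimal, hypothetical validation gate, with made-up field names and rules, that catches inconsistent records before a model ever sees them; without such a gate, the same inconsistencies would simply be reproduced at machine speed.

```python
from datetime import datetime

def validate_record(record: dict) -> list:
    """Return the data-quality issues found in one record (hypothetical rules)."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    amount = record.get("amount")
    if amount is None or amount < 0:
        issues.append("missing or negative amount")
    try:
        datetime.fromisoformat(record.get("date", ""))
    except ValueError:
        issues.append("date not in ISO format")
    return issues

records = [
    {"customer_id": "C-101", "amount": 250.0, "date": "2024-03-01"},
    {"customer_id": "", "amount": -10.0, "date": "01/03/2024"},  # inconsistent
]
for i, rec in enumerate(records):
    issues = validate_record(rec)
    if issues:
        print(f"record {i} blocked before reaching the model: {issues}")
```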
At the same time, the rise of generative AI forces organisations to rethink the value of unstructured data. “All organisations sit on vast amounts of unstructured data: contracts, emails, documents, and records, which are often treated as a liability,” Vermeir notes. “When you combine document AI, process AI, and agentic AI with generative models, that data becomes a strategic asset rather than an operational burden.”
Document AI turns raw text into structured data. Process AI reveals how that data flows across the organisation, exposing breaks and opportunities for automation. “GenAI initiatives fail not because of weak models, but because of a lack of visibility into processes and data flows,” Vermeir says. “Without this, AI operates blindly.”
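The hand-off Vermeir describes can be sketched in a few lines. The following is a hypothetical illustration, not ABBYY’s API: a stubbed document-AI step turns raw text into structured fields, and a process-style event log records where that data flows next, which is precisely the visibility he says generative AI otherwise lacks.

```python
import re
from datetime import datetime, timezone

def extract_invoice_fields(raw_text: str) -> dict:
    """Stub for a document-AI step: raw text in, structured fields out.

    A real system would use an ML extraction model; a regex stands in here.
    """
    number = re.search(r"Invoice\s+#(\S+)", raw_text)
    total = re.search(r"Total:\s*\$?([\d.]+)", raw_text)
    return {
        "invoice_number": number.group(1) if number else None,
        "total": float(total.group(1)) if total else None,
    }

def log_process_event(case_id: str, activity: str, events: list) -> None:
    """Append an event-log row of the kind process-mining tools consume."""
    events.append({
        "case_id": case_id,
        "activity": activity,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

events = []
fields = extract_invoice_fields("Invoice #INV-042 ... Total: $1250.00")
log_process_event(fields["invoice_number"], "extracted", events)
log_process_event(fields["invoice_number"], "routed_to_approval", events)
print(fields)
print(events)  # the event log makes the data flow visible and auditable
```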
Architecture matters more than algorithms
As enterprise AI matures, attention shifts to information architecture. “The biggest bottleneck is not compute or algorithms, but enterprise information architecture,” Vermeir explains. “Most organisations weren’t designed for AI-driven decisions and operate with siloed, inconsistent systems.
“This misalignment limits even the most advanced models. Document and process AI clean and organise data and workflows. They provide the foundation generative models need to deliver insights rather than noise.”
This foundation also becomes critical as organisations experiment with more autonomous systems. “When agentic systems begin making decisions about next steps in a process, the level of responsibility increases dramatically,” Vermeir says. “Continuous monitoring becomes non-negotiable. If you cannot prove what an AI system did and why, you expose yourself to serious compliance and governance risks.”
In summary, trust in AI is built on transparent procedures; decisions must be traceable, explainable, and auditable. This is a critical takeaway for any organisation deploying AI systems.
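One hedged illustration of what “traceable, explainable, and auditable” can mean in code: each autonomous decision is appended to a log entry that captures its inputs, action, and reported rationale, with a hash chaining it to the previous entry so after-the-fact tampering is detectable. The schema below is an assumption for illustration, not a standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(log: list, agent: str, inputs: dict,
                    action: str, rationale: str) -> dict:
    """Append one auditable decision; the hash chains it to the prior entry."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "inputs": inputs,
        "action": action,
        "rationale": rationale,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log = []
record_decision(
    audit_log,
    agent="invoice-router",  # hypothetical agent name
    inputs={"invoice_number": "INV-042", "total": 1250.0},
    action="route_to_manual_review",
    rationale="total exceeds auto-approval threshold",
)
print(json.dumps(audit_log, indent=2))  # every step is traceable after the fact
```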
People still sit at the centre
One of the survey’s more uncomfortable findings is how many employees feel excluded from their organisation’s AI journey. Adoption often happens at the leadership level, while skills lag across the workforce. Positioning employees as co-designers rather than passengers can shift this dynamic: asking questions such as ‘Where can frontline insight improve the model?’ creates a feedback loop that counters the narrative of exclusion and reinforces shared ownership of AI initiatives.
“Bridging the skills gap is essential,” Vermeir says. “AI evolves daily, new models, capabilities, and risks. Without training, employees cannot keep up, leading to exclusion. This is not about workforce reduction. It’s about reskilling and redeployment. When employees are empowered, AI becomes a collaborative transformation, not just a top-down push.”
That collaboration becomes increasingly important as explainability and accountability move from regulatory concerns to competitive differentiators. “When organisations allow AI systems to act autonomously, explainability becomes non-negotiable,” Vermeir says. “It is not just a safeguard. It builds trust with customers, regulators, and employees.”
Looking ahead, Vermeir expects many organisations to continue misallocating investment. “Most over-invest in generative AI tools and under-invest in the foundations: process intelligence, data quality, and skills,” he says. “No technology is a silver bullet. You cannot layer new tools onto broken processes and expect transformation.”
To illustrate this misallocation and catalyse action, enterprises should consider a ‘stop-start-continue’ framework: stop indiscriminate investment in AI tools without foundational support, start prioritising process intelligence and data quality, and continue enhancing workforce skills. This approach better anchors AI’s role in achieving business transformation.
The future of enterprise AI will hinge on organisations’ discipline in strengthening the fundamental processes and data needed to realise AI’s promised impact. “The next decade of productivity will not be about working faster,” Vermeir says. “It will be about working smarter with AI, making better decisions, streamlining processes, and scaling outcomes across the organisation.”
When asked to distil a single principle separating AI leaders from laggards, his answer is direct. “Technology for the sake of technology is never the answer,” he concludes. “Everything must start with a clear understanding of your processes and bottlenecks. That is where AI delivers value. There is no AI without process intelligence.”