Why most AI agents will fail to deliver on their promises


Agentic AI is being sold as the future of enterprise automation, but that future may arrive slower and messier than its advocates suggest. According to new data from Gartner, over 40 per cent of agentic AI projects are projected to be cancelled by the end of 2027, plagued by spiralling costs, murky business cases and insufficient risk controls.

These systems, often described as autonomous software agents capable of making decisions and acting on them, have been caught in a perfect storm of hype and premature deployment. “Most agentic AI projects right now are early-stage experiments or proofs of concept that are mostly driven by hype and are often misapplied,” Anushree Verma, Senior Director Analyst at Gartner, said. She warns that companies risk stalling their AI ambitions by underestimating the cost and complexity of deploying agents at scale.

A Gartner poll of over 3,400 executives earlier this year found that only 19 per cent had made significant investments in agentic AI, while 31 per cent were still waiting to assess its viability. That caution may be well-placed: despite the media’s enthusiasm, most so-called AI agents today are little more than rebranded chatbots, RPA scripts or digital assistants with limited autonomy. Gartner estimates that just 130 of the thousands of companies claiming to offer agentic AI have genuine, differentiated capabilities.

The return of decision-centric AI

Beneath the disappointment, however, is a genuine shift in how AI could reshape business decision-making. Gartner predicts that by 2028, agentic AI will make at least 15 per cent of daily enterprise decisions autonomously, up from zero today. Moreover, one-third of enterprise applications will embed agentic capabilities within the same timeframe.

But progress will depend not on marketing terms or technical bravado, but on a reorientation of priorities. Rather than asking what to do with their data, businesses need to begin with clearer questions: What decisions matter? What outcomes are we targeting? And how can AI contribute to that process with transparency, consistency and accountability?

Aden Hopkins, CEO of XpertRule, argues that a decision-centric approach is essential. “This misalignment highlights a broader issue: a fundamental lack of understanding around AI’s practical value in enterprise settings,” he said. “Many businesses begin with the wrong question… Instead, they should ask, ‘What decisions do we need to make and what outcomes do we want?’”

Hopkins believes that the risk of “polished dishonesty” (the ability of sophisticated AI systems to present convincing but incorrect outputs) is underestimated. In high-stakes, regulated environments, that risk becomes not just technical, but commercial and reputational. “The greatest risk is in mission-critical tasks. This is where failure is not an option and transparency, auditability and explainability is more than a requirement, it’s imperative,” he said.

Building trust before scaling ambition

One proposed route through the uncertainty is Decision Intelligence (DI), which integrates human expertise into AI systems at both design and deployment stages. Hopkins explains that this keeps humans “in the loop” for complex or context-sensitive decisions while allowing repeatable actions to be automated. The goal is not to replace human judgment but to scale it, building systems that are both more consistent and more trustworthy.

Gartner’s guidance echoes that caution. Agentic AI should only be used where it demonstrably improves productivity, and companies should be wary of retrofitting legacy systems without rethinking workflows. The ideal implementation, it suggests, builds from the ground up with measurable goals in mind.

The promise of AI agents lies not in their autonomy, but in their alignment. If businesses can prioritise clarity over novelty, and transparency over complexity, agentic AI may yet fulfil its potential. But in a market thick with ‘agent washing’ and inflated expectations, separating noise from value will be essential. Otherwise, the very tools designed to help us make better decisions may become liabilities no organisation can afford.
