Why AI needs a reality check before it can reshape your business


Executives are being sold the idea that AI will transform everything from decision-making to profitability. However, the real challenge is not adoption but discernment.

Artificial intelligence is evolving faster than regulation, infrastructure and sometimes even common sense. From prescription checking in healthcare to retrieval-augmented generation (RAG) in enterprise systems, the tools are becoming more capable. But the temptation to deploy without understanding is growing just as fast.

The early promise of AI was automation on a large scale. But the reality, according to Dr Clare Walsh, Director of Education at the Institute of Analytics, has been far more nuanced. “We imagined ten years ago that we would be automating everything,” she explains. “In fact, where we found the most gains have been in enhancing and improving. It is not replacing humans; it is enhancing and supporting them. And I think healthcare really has nailed that understanding of where AI can help.”

Her example is sharp and specific: up to 20,000 preventable deaths a year in the NHS due to prescription errors. A new large language model is being trialled to act as a third line of defence, scanning prescriptions for inconsistencies and interactions too complex for overburdened clinicians to catch. “That is going to be amazing if that succeeds. That is a lot of people alive today,” she says.
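The NHS trial Walsh describes uses a large language model, but the underlying "third line of defence" pattern can be sketched far more simply. The rule-based lookup below only illustrates the idea of flagging issues for a clinician to review; the drug names and interactions are illustrative examples, not clinical data.

```python
# Hypothetical sketch of a third-line safety check: screen a prescription
# against a table of known drug-drug interactions and surface warnings for
# a clinician to review. All entries here are illustrative, not clinical data.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin levels",
}

def screen_prescription(drugs):
    """Return a list of warnings; an empty list means no known flags."""
    drugs = [d.lower() for d in drugs]
    warnings = []
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            issue = KNOWN_INTERACTIONS.get(frozenset({a, b}))
            if issue:
                warnings.append(f"{a} + {b}: {issue}")
    return warnings

print(screen_prescription(["Warfarin", "Aspirin", "Paracetamol"]))
```

The key design point matches Walsh's framing: the system only warns, and the clinician remains the decision-maker.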

This, Walsh argues, is where AI excels, not by replacing human judgement but by acting as a silent partner that enhances vigilance, insight and safety. The same principles apply outside of healthcare, wherever decisions are high-stakes and data-rich.

Think beyond the model hype

Many organisations, Walsh believes, still misunderstand the scope of AI. “AI is this very ephemeral word that its definition changes as the technology evolves,” she says. “For us, the current generation of AIs, it is advanced data analytics, and that is fundamentally where it begins.”

This misunderstanding creates problems. Companies focus too heavily on generative tools without integrating foundational techniques, such as clustering or predictive modelling, that are more closely aligned with their business needs. Walsh cautions against defaulting to AI when simpler or more effective methods exist. “My favourite example is when Facebook tried complex language models to identify foreign-sponsored political content,” she adds. “Eventually, they just required a physical name and address to post political ads. That worked far better. You must consider all your options. AI is one of them, but not necessarily the right one.”
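Walsh's Facebook example is worth making concrete: the fix that worked was a verification rule, not a model. A minimal sketch of such a rule, with hypothetical field names, shows how little code the "simplest option" can take.

```python
# Sketch of the "simplest option first" point: a plain eligibility rule
# requiring a verified name and physical address before a political ad runs.
# The ad fields used here are hypothetical, for illustration only.

def may_run_political_ad(ad):
    """Allow non-political ads; require verification for political ones."""
    if not ad.get("is_political"):
        return True  # the rule applies only to political content
    return bool(ad.get("verified_name")) and bool(ad.get("verified_address"))

# An ad without a verified address is rejected regardless of its wording.
print(may_run_political_ad({"is_political": True, "verified_name": "A. Buyer"}))
```

No training data, no model drift, and the logic is auditable at a glance, which is exactly why the simple option sometimes wins.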

That principle also applies when choosing between black-box and white-box AI. With regulatory scrutiny increasing, especially under the European AI Act, many businesses are pulling back from opaque models in favour of transparent alternatives. “There has been a bit of a pullback,” Walsh notes. “But at the end of last year, we started to see models that could report back what they were thinking about, which is revolutionary… It basically means we can understand it, we can control it a bit more.”

Bias, ethics and the human firewall

The core challenge in AI is not technical; it is behavioural. Walsh is clear: “It is not AI per se that is biased; it is the model sets themselves and how they are developed,” she says. The issue stems from legacy datasets that encoded past prejudices. In clinical trials, for example, women were often excluded due to hormonal fluctuations or pregnancy risks. “The result is that today we have got no data on how women’s bodies react to drugs,” she adds. “Health data is chronically biased.”

These biases are amplified by the very nature of machine learning, which is designed to identify and exploit patterns, even when those patterns are discriminatory. “You cannot ever get rid of it entirely,” Walsh says. “Eventually, we will find someone who is disadvantaged by every single process. We literally cannot remove it. What we must do as companies is to take steps to identify the bias.”

That includes testing models on internal datasets, demanding transparency from vendors, and deploying only where human oversight can provide accountability. “AIs can spot health and safety hazards, but legally, they cannot be responsible,” Walsh adds. “The human has to decide.”
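The oversight principle Walsh describes, that the model can flag but a human must decide, maps onto a common routing pattern. This sketch is an illustration only; the threshold and field names are assumptions.

```python
# Minimal human-in-the-loop routing: low-confidence or high-impact flags go
# to a review queue instead of being acted on automatically, so a person
# remains legally and practically accountable. Values are illustrative.

REVIEW_THRESHOLD = 0.90

def route(flag):
    """Decide whether a model flag is auto-logged or sent for human review."""
    if flag.get("confidence", 0.0) < REVIEW_THRESHOLD or flag.get("high_impact"):
        return "human_review"   # accountability stays with a person
    return "auto_log"           # low-stakes, high-confidence: record only

print(route({"confidence": 0.97, "high_impact": True}))   # impact forces review
print(route({"confidence": 0.95, "high_impact": False}))  # safe to auto-log
```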

Overconfidence is a bigger risk than failure

The accessibility of tools like ChatGPT creates a false sense of confidence, especially among non-specialists. “You are at risk of becoming the worst kind of data scientist,” Walsh warns. “They can very convincingly be horribly wrong. They have no idea when they are clueless.”

She gives a stark example. Headline accuracy figures can mislead: a model that is 99.999 per cent accurate still misses one case in 100,000, and that single missed case is precisely the edge-case risk involved in fraud detection or equipment failure. “Sometimes 99.999 per cent accurate just means, well, we missed the fraud,” she says.
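The accuracy trap is easy to demonstrate with a few lines of arithmetic. On a dataset where fraud is rare, a "model" that never flags anything scores 99.99 per cent accuracy while catching zero fraud; the figures below are invented for illustration.

```python
# Worked example of the accuracy trap: with 10 fraud cases in 100,000
# records, a model that always predicts "not fraud" is 99.99% accurate
# yet catches nothing, which is exactly the failure Walsh describes.

labels = [1] * 10 + [0] * 99_990      # 1 = fraud; 10 cases in 100,000 records
predictions = [0] * 100_000           # degenerate model: never flags fraud

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.4%}")     # 99.9900%
print(f"fraud cases caught: {caught}") # 0
```

This is why metrics such as recall on the rare class matter more than raw accuracy in fraud or failure detection.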

For enterprises, the danger is that a slick interface and human-like text create a confidence that disguises the unreliability of the underlying model. This is where training, internal policy and continuous monitoring become essential. “You need to have a plan for monitoring it,” Walsh continues. “And if it does go wrong, and it will, you must have a feedback loop built in. That time is not wasted if future projects can learn from your mistakes.”
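One way to make "a plan for monitoring it" concrete is to keep a rolling window of human-verified outcomes and raise an alert when the error rate drifts past a tolerance. The window size, threshold, and class name below are illustrative assumptions, not a prescribed design.

```python
# Sketch of a monitoring feedback loop: record whether each verified model
# output was wrong, track the error rate over a rolling window, and alert
# when it exceeds a threshold. Window and threshold are illustrative.

from collections import deque

class DriftMonitor:
    def __init__(self, window=100, max_error_rate=0.05):
        self.outcomes = deque(maxlen=window)   # True = model was wrong
        self.max_error_rate = max_error_rate

    def record(self, model_was_wrong):
        """Feed back one verified outcome; return True if an alert fires."""
        self.outcomes.append(model_was_wrong)
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.max_error_rate

monitor = DriftMonitor()
alerts = [monitor.record(i % 10 == 0) for i in range(100)]  # 10% error rate
print(any(alerts))  # the feedback loop catches the drift
```

The alert is only the trigger; the point of Walsh's feedback loop is that what it catches feeds into the next project.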

Scaling AI means scaling trust

Sustainability is another blind spot in the rush to scale. Few businesses understand the energy demands of AI inference, especially from large language models. Each prompt processed requires immense compute resources, often across water-cooled server farms powered by fossil fuels. “We estimate it is around 50 times as much electricity needed to run a question through a large language model than it would be to run it through search,” Walsh says. “The carbon dioxide production is truly alarming.”
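The scale of that 50x multiplier becomes clearer with some rough arithmetic. In the sketch below, only the 50x ratio comes from Walsh; the per-search energy figure and the query volume are assumptions chosen purely for illustration.

```python
# Back-of-envelope energy estimate built on the ~50x ratio quoted above.
# The per-search figure (0.3 Wh) and the query volume are assumptions.

SEARCH_WH = 0.3                 # assumed energy per web search, in Wh
LLM_WH = SEARCH_WH * 50         # Walsh's ~50x multiplier -> 15 Wh per prompt

queries_per_day = 10_000        # hypothetical enterprise usage
annual_kwh = LLM_WH * queries_per_day * 365 / 1000

print(f"{annual_kwh:,.0f} kWh per year")   # 54,750 kWh
```

Under these assumptions, a single routine internal workload consumes tens of megawatt-hours a year, which is the kind of line item net zero reporting would have to capture.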

Yet AI is now embedded in many standard tools. From Zoom to Microsoft, background features such as notetaking and AI search are becoming the norm, and most enterprises are unaware of the environmental trade-offs. “It is going to impact your ability to meet your net zero targets,” she warns. “We are reaching the limits of how much electricity countries like the US can get their hands on to keep feeding them.”

Until mandatory carbon reporting is introduced for AI providers, the onus remains on buyers to ask difficult questions. “At the moment, it has not quite found its business model,” she says, “but that may change very soon.”

The skills gap is the real adoption gap

While the tools advance, the workforce lags behind. AI adoption has been far slower than many predicted because human readiness remains the bottleneck. “I think adoption is always going to be slow,” Walsh says. “The technology is invented, and there is a big delay before people embrace it. That delay is a good thing. It buys people time to adapt.”

But time is running out. As AI becomes embedded in every business function, from logistics to marketing, the ability to collaborate with intelligent systems is becoming as essential as email once was. “It is like when we had to become digitally competent because communications went digital,” Walsh argues. “Now decision-making is going digital.”

And while user-friendly interfaces may appear to democratise AI, she sees that as a dangerous myth. “It is great that people can use it without coding skills. But you still need the expertise of someone who understands how it is working,” Walsh says. “Otherwise, you just get beautifully written nonsense, and you will not know it is nonsense.”

Her advice to small businesses is to start small, focusing on tedious but valuable tasks such as data cleaning, notetaking, internal search, and social content drafting, and then build confidence before scaling. “Look at the boring tasks. Do not look at the shiny stuff,” she says.
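Data cleaning is a good instance of the "boring tasks" Walsh recommends starting with. The sketch below normalises whitespace and case, then deduplicates a list of names; the records themselves are made up.

```python
# A boring-but-valuable starter task: normalise and deduplicate records.
# Collapses repeated spaces, fixes casing, and drops duplicates while
# preserving first-seen order. The sample names are invented.

def clean(records):
    seen, out = set(), []
    for name in records:
        key = " ".join(name.split()).title()   # collapse spaces, fix case
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

print(clean(["jane  smith", "Jane Smith ", "JANE SMITH", "Amir Khan"]))
# -> ['Jane Smith', 'Amir Khan']
```

Tasks like this are low-risk precisely because the output is easy to verify, which makes them a safe place to build confidence before scaling.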

The next frontier is explainable intelligence

With transformer-based large language models hitting diminishing returns, the next wave of innovation will focus on context-aware systems that can explain their thinking and adapt to specific scenarios, the self-reporting models Walsh earlier called revolutionary because “we can understand it; we can control it a bit more.”

That shift from black box to interpretable models is not just about compliance. It is about building trust in systems that are increasingly woven into decision-making, from procurement to hiring to customer service. As the sector matures, so must the leadership approach. AI is no longer the exclusive domain of technical specialists. Every executive now has a role to play in asking better questions, demanding better explanations, and thinking harder about the invisible trade-offs. “The next ten years are going to be critical,” Walsh concludes. “And now is the easiest time to get started.”
