Embedding intelligence where it matters most


AI in enterprise operations is shifting from experimentation to embedded execution, transforming how decisions are made across critical workflows. The organisations making real progress are those that integrate AI directly into their processes, infrastructure, and governance models.

AI has reached an inflexion point in the enterprise, but not in the way the headlines would have us believe. While large language models dominate the narrative and personal productivity tools grab the spotlight, the fundamental transformation is happening deeper in the stack. It is taking place at the process level, where intelligent systems are being embedded into workflows, applications, and decisions that matter. Here, AI is not a shiny new interface but an operational backbone.

This is not about co-pilots that help you summarise emails. It is about AI that determines whether a vulnerable adult has failed to make their morning tea or whether a cancer patient is likely to miss a critical appointment. It is about augmenting judgment, not replacing it. And it is here, in this complex and highly sensitive terrain, that the CIO must now lead.

The CIO and the AI imperative

The CIO is no longer a gatekeeper of IT infrastructure. That era has passed. Today, the role is inherently strategic, with AI now squarely within its remit. The push for adoption often begins at the top, driven by board-level pressure to leverage AI. But as Richard Farrell, Chief Innovation Officer at Netcall, warns, this top-down enthusiasm can backfire without the right foundations in place.

“There has been a sort of top-down push to CIOs from the CEO often to implement some AI,” he says. “To some extent, that is the wrong way around. You need to look at people, processes and technology in that order. If you start with the technology first, and then try and find a process for it, and then find some people to use it, you will end up with some failures.”

The result is all too familiar: a burst of pilot projects that never scale, use cases detached from outcomes, and an operational model unfit for real-world deployment. AI, Farrell insists, must be grounded in a clear objective. Smart organisations begin with outcomes, invest in robust governance, build scalable infrastructure, and nurture a culture of experimentation—without gambling everything on a single moonshot.

When failure is not an option

In domains such as healthcare and local government, experimentation cannot come at the expense of accuracy. Netcall’s customers include NHS Trusts using machine learning to predict patient attendance and regional councils providing AI-guided citizen services. These are not environments where the ‘fail-fast’ mantra applies. The margin for error is razor-thin.
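For illustration only, the sketch below shows the simplest possible form such an attendance-prediction model could take: a logistic regression over made-up appointment features such as booking lead time, previous no-shows, and whether a reminder was sent. The data, feature names, and library choice are assumptions for the example; they do not describe Netcall's or any NHS Trust's actual system.

```python
# Illustrative sketch only: a toy did-not-attend (DNA) risk model on made-up
# appointment data. Feature names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical appointments: simple features plus the outcome.
history = pd.DataFrame({
    "days_booked_in_advance": [3, 42, 7, 60, 1, 21, 90, 14],
    "previous_no_shows":      [0, 2, 0, 3, 0, 1, 4, 0],
    "reminder_sent":          [1, 0, 1, 0, 1, 1, 0, 1],
    "attended":               [1, 0, 1, 0, 1, 1, 0, 1],
})

X, y = history.drop(columns="attended"), history["attended"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score upcoming appointments; a low attendance probability prompts a human
# follow-up (extra reminder, transport support), not an automated decision.
upcoming = pd.DataFrame({
    "days_booked_in_advance": [55, 2],
    "previous_no_shows":      [3, 0],
    "reminder_sent":          [0, 1],
})
print(model.predict_proba(upcoming)[:, 1])  # probability of attending
```

The point of the sketch is the last comment: the model flags risk, and a person decides what to do about it.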

“We cannot put a disclaimer on something and say, ‘This might be inaccurate’,” Farrell adds. “This might be a cancer diagnosis. You cannot mess around with that. You cannot inform someone of the incorrect application process for a Blue Badge. It must be right. These are not the moments to move fast and break things.”

This is where platform thinking becomes critical. It is not just about the power of an AI model but also about where it resides, how it integrates, and what governance surrounds it. Farrell describes a platform-driven future where low-code environments, robotic process automation, and AI engines are not bolted together but built from the same core architecture. It is the only way, he believes, to deliver intelligence at scale without losing trust or control.

Low code as the new engine room

Much of Netcall’s vision is rooted in the convergence of low-code development and artificial intelligence. This is not just about speeding up application delivery. It is about embedding intelligence directly into business processes without needing an army of data scientists to wire it all together. “The low-code application is aware of what AI models you have,” Farrell continues. “It is just a dropdown. You do not have to go off to a separate platform, use an API key, or worry about the syntax. The model is trained on your data within your environment and remains within the UK. That means sovereignty, transparency, and performance monitoring are all covered.”

The company’s acquisition of a process modelling business has given it access to a library of 20,000 real-world business processes, from recruitment to claims handling, which now inform its AI training and application generation. The result is not just faster development but smarter automation, where AI enhances both the process and the outcome.

Commoditisation and the application layer

Farrell is clear-eyed about the direction of travel. Large language models are becoming commoditised. What matters now is not who owns the model but how it is used. The real value, he argues, lies not in the intelligence itself but in its orchestration. “Having an LLM or access to an LLM is not a differentiator,” Farrell explains. “The value is how can you embed that in applications, so you get the power, but you do not have to worry about all of the training, data science, ethics and drift. Those risks are eliminated because it is embedded.”

He points to the growing power of models like LLaMA 4, with its ten-million-token context window, as enabling hyper-personalisation without sacrificing data privacy. But none of that matters, he argues, if the model is not tightly integrated into the platform and governed accordingly.

Data quality and the illusion of completeness

When it comes to AI infrastructure, Farrell believes the most overlooked issue is not compute or latency but rather data completeness. The illusion of comprehensive datasets often masks a deeper problem: the critical context lost in human interactions. “Organisations may have lovely ERP and CRM systems, but they miss face-to-face engagements, phone calls, internal conversations, whole swathes of nuance that never get captured,” Farrell says. “That gap can make the difference between a good decision and a dangerous one.”

In sectors such as adult social care, where sensors track motion and humidity to infer wellbeing, Farrell explains that the deluge of data must be sifted for meaning. However, in areas such as claims, complaints, or patient engagement, the challenge is the opposite: extracting insights from under-documented, high-emotion touchpoints.
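As a purely illustrative sketch of the first kind of problem, the snippet below applies a simple rule to hypothetical home-sensor events, flagging a missed morning routine for a human to check rather than acting on it automatically. The sensor names, timestamps, and cut-off are assumptions made for the example, not a description of any deployed system.

```python
# Illustrative sketch only: a rule-based wellbeing check over hypothetical
# home-sensor events, in the spirit of the "morning tea" example above.
from datetime import datetime, time

# Hypothetical events from a kitchen motion sensor and a kettle humidity sensor.
events = [
    {"sensor": "kitchen_motion", "timestamp": datetime(2024, 5, 14, 6, 55)},
    {"sensor": "kettle_humidity_spike", "timestamp": datetime(2024, 5, 14, 7, 5)},
]

def morning_routine_missed(events, cutoff=time(10, 0)) -> bool:
    """Return True if no kettle activity is seen before the cutoff time."""
    return not any(
        e["sensor"] == "kettle_humidity_spike" and e["timestamp"].time() <= cutoff
        for e in events
    )

# The rule only raises a flag for human review; the care team makes the call.
if morning_routine_missed(events):
    print("No morning kettle activity detected: alert the care team to check in.")
else:
    print("Morning routine observed; no action needed.")
```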

Trust, governance and ethical scaffolding

Without a solid ethical and governance framework, AI cannot scale. Farrell has seen too many examples where privacy and safety were bolted on after deployment rather than built in from the start. One case he cites involves a mainstream tool that took desktop screenshots every few seconds, storing the results in plain text by default.

“That is not privacy by design,” Farrell adds. “You cannot add ethics afterwards. You need to start with that framework, especially in high-risk sectors like healthcare, education, and government. It should not be a black box. We should be able to answer questions like: where is the training data coming from? What are the key features? What are the fail-safes?”

Fortunately, public sector procurement is beginning to reflect this reality. Farrell points to the UK government’s data ethics guidelines as a practical framework for implementation, which is increasingly being adopted in AI tenders. They provide the right level of scrutiny and a welcome push toward standardised transparency.

AI is not a silver bullet

If there is one misconception Farrell is determined to challenge, it is the belief that AI can fix fundamentally broken organisations. The most dangerous fallacy, he argues, is the idea that technology alone can deliver transformation. “Some of the most obvious use cases for GenAI, a chatbot here, a co-pilot there, risk misleading people,” he says. “Summarising meeting minutes is not transformation. It does not clear waiting lists or resolve insurance claims. Embedding AI into process and application layers, that is where the real value lies.”

It is here that the CIO must play a central role, not only in choosing the right tools but in knowing where they belong. In Farrell’s view, the line between automation and human oversight is drawn at the point of consequence. When mistakes impact lives, rights, or wellbeing, humans must be informed.

The path to AI agency

Over the next five years, Farrell expects a shift towards genuine AI agency at the enterprise level. This will not be about giving every employee a personal AI assistant but rather about enabling applications to make autonomous decisions across multi-step processes.

“We will see more agency at the process level, where systems decide what to do, which data to use, and how to integrate,” Farrell concludes. “That will close the gap between platforms and outcomes. It will not be a quick win, but it will be a meaningful one.”

The challenge, as ever, is not just to build something intelligent but to make it useful, trusted, and embedded where it matters. AI is not a destination. It is a capability. And it belongs not in pilots or portals but in the places where outcomes are made and accountability lives.
