As AI evolves from reactive chatbots to autonomous agents, organisations are beginning to rewire how work is delegated, orchestrated and delivered. The shift is creating measurable gains in productivity while raising more profound questions about infrastructure, responsibility and long-term competitiveness.
The promise of artificial intelligence has always hinted at something more than automation. For decades, it was framed in the language of assistance and augmentation: amplifying human productivity, improving decision-making. But with the emergence of generative and now agentic AI, that promise is rapidly crystallising into something tangible, something operational.
Alan Flower, Executive Vice President and Global Head of AI and Cloud Native Labs at HCLTech, has a unique vantage point over this unfolding shift. His perspective is shaped not by abstract theorising but by direct deployment across sectors and functions. Through this lens, part practitioner, part strategist, he sees AI’s present inflexion point not simply as a technical breakthrough but as a structural redefinition of how organisations work.
Moving beyond conversations
The generative wave of late 2022 and 2023 was remarkable for its velocity. Within weeks of OpenAI’s initial ChatGPT release, enterprises were experimenting. Within months, vendors were repackaging products around large language models. Within a year, chatbots and virtual assistants were ubiquitous. For many organisations, it felt like progress. But for Flower, this early adoption would always be a staging point.
“What we are now seeing is a transition away from commoditised, conversational AI into what we call agentic AI,” he explains. “This is where the nature of interaction changes. You are no longer asking the AI a question but instructing it to perform a task, an outcome, not an answer.”
This distinction is critical. Generative models began as linguistic interfaces. Their primary output was text. Agentic AI represents the next phase: systems that can reason, plan and execute tasks alone or in collaboration with other AI agents. Where a chatbot might help an employee find a document, an agent can autonomously summarise its contents, extract insights, populate a dashboard, and trigger workflows. “It is the point at which AI shifts from being an assistant to becoming an actor,” Flower adds. “It no longer waits to be told what to do next. It determines what should be done.”
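The distinction between asking and instructing can be made concrete with a toy sketch (a hypothetical Python illustration, not HCLTech's implementation): the assistant returns text, while the agent chooses and executes a sequence of tool calls to reach an outcome.

```python
# Hypothetical sketch of the contrast described above: a conversational
# assistant returns an answer, while an agent plans and executes steps
# toward an outcome. Tool names and the naive planner are assumptions.

def assistant(question: str) -> str:
    """Chatbot-style: text in, text out; the human drives every step."""
    return f"Here is what I found about: {question}"

# Toy tools the agent can invoke; each consumes the previous step's output.
def summarise(doc: str) -> str:
    return f"summary({doc})"

def extract(summary: str) -> list:
    return [f"insight from {summary}"]

def dashboard(insights: list) -> str:
    return f"dashboard updated with {len(insights)} insight(s)"

TOOLS = {"summarise": summarise, "extract": extract, "dashboard": dashboard}

def plan(goal: str) -> list:
    # Naive planner: a real agent would derive these steps from the goal.
    return ["summarise", "extract", "dashboard"]

def agent(goal: str, doc: str) -> list:
    """Agent-style: given an outcome, decide and run the steps itself."""
    state, log = doc, []
    for step in plan(goal):
        state = TOOLS[step](state)
        log.append(step)
    return log
```

The point of the sketch is the interface change: `assistant` is invoked once per question, while `agent` is handed a goal and works through its own plan unattended.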
The technical implications are substantial, but so are the cultural ones. Delegation to software is not new. However, giving an autonomous system the latitude to determine its own methods and orchestration logic introduces new expectations around performance, interpretability, and oversight.
Embedded agents in enterprise workflows
This evolution is not speculative. HCLTech is already using agentic systems in production. One of the most visible examples is within the global IT service desks the company manages for Fortune 2000 clients. “IT service desks are often overwhelmed,” Flower explains. “Every malfunction, failed update, and access issue generates a ticket. Those tickets must be triaged, investigated, and resolved. And that volume only increases as digital estates grow.”
HCLTech has developed a portfolio of AI agents designed to replicate the roles of network engineers, cloud engineers and support staff. These agents now operate side-by-side with human analysts, handling tickets autonomously. “Engineers can now delegate tasks to their AI counterparts,” Flower continues. “The agent will interpret the ticket, diagnose the issue, and apply the fix, be it a reconfiguration, a patch or a permission update. Once resolved, the agent closes the ticket.”
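The ticket lifecycle Flower outlines, where an agent interprets, diagnoses, fixes and closes, with humans kept in the loop, could be sketched roughly as follows (keyword-based diagnosis and the fix names are illustrative assumptions, not the actual system):

```python
# Illustrative sketch of an autonomous ticket-resolution loop: diagnose
# by keyword, apply a known fix and close the ticket, or escalate to a
# human analyst when no fix applies. All fix names are assumptions.

from dataclasses import dataclass

KNOWN_FIXES = {
    "access": "permission update",
    "update": "patch",
    "network": "reconfiguration",
}

@dataclass
class Ticket:
    description: str
    status: str = "open"
    resolution: str = ""
    assigned_to: str = "agent"

def resolve(ticket: Ticket) -> Ticket:
    """Interpret the ticket, apply a known fix, or hand it to a human."""
    for keyword, fix in KNOWN_FIXES.items():
        if keyword in ticket.description.lower():
            ticket.resolution = fix
            ticket.status = "closed"
            return ticket
    # No known fix: keep the human in the loop rather than guessing.
    ticket.assigned_to = "human analyst"
    return ticket
```

The escalation branch is the augmentation point Flower stresses: the agent clears routine tickets, while anything outside its playbook lands with a person.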
The effect is twofold: a measurable increase in service desk throughput and a qualitative improvement in employee focus. Staff can redirect their attention to higher-order challenges such as complex troubleshooting or strategic system improvements. But Flower is careful to frame this as augmentation, not displacement. “We are not removing humans from the loop,” he says. “We are optimising the loop. The people on these desks now spend more time doing the work they were trained for.”
This principle of embedded augmentation carries into other sectors. In healthcare, HCLTech has developed an AI Clinical Advisor that supports doctors during consultations. It synthesises the latest research, cross-references patient data, and suggests treatment options, all in real time. At the same time, it automates administrative tasks, from updating health records to scheduling follow-ups.
“In the UK, a GP appointment lasts around eight minutes, if you are lucky,” Flower continues. “The system is under strain. Our Clinical Advisor gives clinicians a kind of digital second brain that reads the research, checks the guidelines and handles the paperwork.” A primary healthcare provider is now deploying this system globally. Its significance lies in integrating seamlessly with clinical workflows rather than replacing them. The value lies not just in the technology but in its deployment context.
From experimentation to integration
The shift from experimentation to integration is consistent across Flower’s observations. For him, generative AI has moved decisively beyond pilots and proofs of concept. What matters now is execution at scale. “Over the last nine months to a year, we have seen a clear transition,” he says. “Organisations are no longer asking what is possible. They are asking how fast they can get to production. That is because the technology has become more accessible, affordable, and consumable.”
This is where commoditisation plays a subtle role. Flower is not dismissive of it; he sees it as a feature. “The bar has been lowered,” he explains. “You do not need to be a hyperscaler or a digital-native to apply AI to your processes. Any organisation of any size can now develop an AI-enhanced solution.” The differentiation now lies in integration. AI is most effective not when deployed as a separate initiative but when embedded invisibly into core workflows. This is where forward-thinking enterprises are finding strategic advantage.
“Companies like Rockwell Automation are not building AI on the side,” he says. “They are weaving it into their existing systems. That is where you get real impact: when AI is part of how a system thinks, not something added after the fact.” At HCLTech, this approach has led to the development of AIForce, a platform designed to apply AI across the entire software engineering lifecycle. Unlike narrow co-pilot tools focused on code generation, AIForce addresses the full spectrum of engineering tasks: requirements gathering, documentation, testing, deployment and maintenance.
“Modern developers spend only a third of their time writing code,” Flower says. “The rest is spent navigating complex processes. We are applying AI to all of it, to every role in the engineering team, not just the developer.” This philosophy of system-wide augmentation, rather than point enhancement, reflects a more mature view of AI’s role. It becomes a layer of intelligence within enterprise infrastructure, not a standalone capability.
AI infrastructure and the hybrid future
If the front end of AI is becoming more intuitive, the back end is becoming more complex. Infrastructure is increasingly the hidden battleground. While public cloud remains dominant, Flower sees a marked shift toward hybrid and multi-cloud environments, driven not by ideology but by data gravity. “AI workloads need to run close to the data,” he explains. “That is the most obvious and immovable constraint. You can only move so much data to the cloud before the economics or latency makes it unfeasible. So we are seeing a resurgence in on-premise workloads, GPU-as-a-service models, and specialised bare metal deployments.”
This hybrid future creates new challenges. Clients may be locked into long-term public cloud contracts yet need the flexibility to deploy workloads elsewhere. Managing this complexity, technically and commercially, is now a key enterprise capability. HCLTech’s infrastructure practice includes managing some of the world’s largest data centre estates, including commercial operations and the private infrastructure of leading technology firms.
“Clients are increasingly looking to customise models to reflect their specific business processes and data,” Flower says. “This is driving demand for small language models: fine-tuned versions of large models optimised for a given domain.” These models are often trained in temporary, high-performance environments. “GPU-as-a-service providers are doing well here,” Flower continues. “Clients can rent capacity to fine-tune a model, then deploy it on-premise. This gives them control without permanent capital expenditure.”
The result is a fragmented, dynamic compute landscape. For organisations, the key is not uniformity but portability, moving workloads to where they make the most sense, whether for performance, cost or regulatory compliance.
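The data-gravity trade-off described above can be reduced to a toy placement rule (the thresholds and site names here are assumptions for illustration, not real sizing guidance):

```python
# Toy illustration of "data gravity": decide where a workload runs based
# on where its data lives, how much there is, and how latency-sensitive
# it is. Thresholds and site names are assumptions for the sketch.

def place_workload(data_site: str, data_gb: float, latency_ms: float) -> str:
    """Return a deployment target for the workload.

    Large datasets or tight latency budgets keep the workload next to
    the data; small, latency-tolerant jobs can go to public cloud.
    """
    if data_gb > 10_000 or latency_ms < 10:
        return data_site          # run close to the data
    return "public-cloud"         # cheap, elastic, and good enough

print(place_workload("on-prem-dc", data_gb=50_000, latency_ms=40))  # → on-prem-dc
```

Real placement decisions add commercial constraints (contract lock-in, egress fees, regulatory residency), but the shape of the reasoning, portability governed by where the data sits, is the same.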
Trust as the ultimate differentiator
Amid all this technical progress, one theme returns consistently: trust. For Flower, it is the defining challenge and opportunity of the next phase. “There is no responsible AI without trustworthiness,” he explains. “You must know that your AI system produces accurate, reliable outcomes. You need to know that it complies with legislation. And most importantly, you need to be able to prove those things to others.”
Responsible AI is more than governance. It is an operational requirement and a strategic asset. Clients increasingly want assurance that their systems are not only effective but also safe, fair and interpretable. This is particularly true in regulated sectors such as healthcare, finance and critical infrastructure. “If I can prove that my AI-enabled solution is trustworthy, then that becomes a competitive advantage,” Flower concludes. “It is not just about meeting a compliance requirement but about giving users confidence. That trust is something you can trade on.”
HCLTech has embedded these principles into its deployment frameworks, offering clients structured methodologies to assess, monitor, and audit AI performance over time. The goal is not only to meet emerging regulatory standards but also to define best practices in environments where those standards are still evolving.
For Flower, this alignment between performance and responsibility is the natural culmination of AI’s current trajectory. The journey from chatbots to agents, from augmentation to orchestration, from experimentation to trust, is now fully underway. For enterprises navigating this landscape, the question is no longer when to start but how to ensure their AI future is intelligent by design and trustworthy by default.