The next evolution of IT operations will not be driven by scripts, dashboards, or rule engines but by agentic AI. This new architecture promises to radically transform how enterprises manage complexity, optimise performance, and move from reactive firefighting to self-directed systems.
The distinction between automation and autonomy is often lost in discussions about AI in enterprise IT. For Arunava Bag, Chief Technology Officer (EMEA) at Digitate, that difference could not be more significant. “The core idea behind agentic AI is that you give a machine a task, and it does not just execute a script, it investigates, orchestrates, takes decisions and acts,” he explains. “That used to be called self-healing, and ten years ago, the very idea caused shock and awe, especially in regulated industries. However, that fear is now fading. Machines taking autonomous action is becoming more palatable.”
This evolution has not happened overnight. In the early years, AI was confined mainly to deterministic workflows, following predictable scripts based on predefined inputs. However, a gradual shift has occurred, driven by two key developments. The first is a growing acceptance of responsible and explainable AI. The second, more recent, is the mainstreaming of generative AI. “Once generative AI became popular, people started to trust machines to do more,” Bag adds. “The leap from generating text to autonomously triaging IT incidents no longer seems so vast.”
The critical difference now lies in orchestration. Agentic AI integrates various decision points, tools, and models into an intelligent workflow that can learn from the past and act in the present. In practical terms, if a website crashes, the agent does not simply trigger a restart. It understands the context, queries infrastructure logs, analyses application performance, and isolates the root cause, then either resolves it or flags the necessary action with human-readable justification.
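To make the pattern concrete, the sketch below shows what such an orchestration loop might look like in outline. It is a minimal, self-contained illustration under assumed names: the data-gathering functions are stubs, and the confidence threshold and policy "tollgate" are invented for the example rather than drawn from any particular product.

```python
# A minimal sketch of the orchestration pattern described above -- not any
# vendor's implementation. All names are hypothetical, and the data-gathering
# functions are stubs standing in for real log and APM integrations.
from dataclasses import dataclass


@dataclass
class Diagnosis:
    root_cause: str    # e.g. "connection-pool exhaustion"
    remediation: str   # e.g. "recycle app pool and raise pool size"
    confidence: float  # the agent's own confidence in the diagnosis
    evidence: list     # log lines / metrics supporting the conclusion


def fetch_logs(service):
    # Stub: in practice this would query a log platform for recent errors.
    return ["ERROR: connection pool exhausted", "WARN: latency > 2s"]


def fetch_metrics(service):
    # Stub: in practice this would query an observability backend.
    return {"error_rate": 0.31, "p95_latency_ms": 2400, "pool_in_use": 1.0}


def diagnose(alert, logs, metrics) -> Diagnosis:
    # Stub reasoning step: a real agent would combine a model with causal
    # rules here; this simply pattern-matches the stubbed evidence.
    if metrics["pool_in_use"] >= 0.95:
        return Diagnosis("connection-pool exhaustion",
                         "recycle app pool and raise pool size",
                         confidence=0.92, evidence=logs)
    return Diagnosis("unknown", "manual investigation", 0.3, logs)


def policy_allows(action) -> bool:
    # Policy "tollgate": only pre-approved actions run without a human.
    return action in {"recycle app pool and raise pool size"}


def handle_incident(alert) -> str:
    """Triage one incident: gather context, diagnose, then act or escalate."""
    diagnosis = diagnose(alert, fetch_logs(alert["service"]),
                         fetch_metrics(alert["service"]))
    if diagnosis.confidence >= 0.9 and policy_allows(diagnosis.remediation):
        return f"Resolved: {diagnosis.root_cause} via {diagnosis.remediation}"
    return (f"Escalated: probable cause '{diagnosis.root_cause}', "
            f"evidence {diagnosis.evidence}")


print(handle_incident({"service": "checkout-web"}))
```

The point of the structure is the split between diagnosis and action: the agent always produces a human-readable justification, and only acts on its own when both its confidence and the policy gate allow it.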
From triage to transformation
While agentic AI is gaining traction in triage and diagnosis, the path to full autonomy remains a work in progress. “The real shift we are seeing is in incident and problem management,” Bag continues. “When a major outage happens, historically, twenty engineers get on a call, spend hours pointing fingers, and eventually trace the fault. With agentic systems, triaging is becoming faster and more accurate.”
He points to advances in causal AI and temporal analysis as key enablers. These systems do not just respond to issues; they analyse time-series data, correlate upstream signals, and surface probable root causes. “For recurring incidents, we can now move towards eliminating the underlying issue rather than fixing symptoms,” he explains. “That is the real value, proactive resolution, not just reactive response.”
The limitations are equally clear. Most enterprises still keep the final act of resolution under human control, especially for high-risk systems. But Bag believes this will change. “In the next two to five years, we will see machines take over the entire incident lifecycle, provided they can explain their actions and work within policy-defined tollgates. That transparency is key.”
Despite widespread interest, Bag is clear-eyed about where AI has disrupted IT most effectively and where it has not. “Automation has been transformed,” he says. “Writing automation scripts and running them at scale is now easier and faster. The right-hand side of DevOps, everything in operations, is where AI has landed well.”
The left-hand side, however, remains resistant. Requirement gathering, user story refinement, and architectural decisions still rely heavily on human interpretation. “That part of the pipeline, design and planning, is still a shadowy area,” he adds. “AI will get there, but not yet.”
He also highlights the persistent cultural inertia in many organisations. “Most enterprises are still reactive,” he continues. “CIOs tell me they have 20 monitoring tools, yet they only discover an issue after a customer complains. They are managing through dashboards instead of enabling intelligent response.”
Proactive intelligence in action
A truly proactive AI-enabled operation, Bag argues, depends on three interlocking capabilities: observability, forecasting, and root cause elimination. “Modern observability platforms allow you to see your systems holistically. But that is only the first step,” he explains. “The second is signature-based forecasting. If Event A happens, and it usually leads to Event B within 30 minutes, the system can alert you to take preventive action.”
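In outline, signature-based forecasting can be as simple as counting how often one event has historically been followed by another within a time window, then alerting when the leading event recurs. The toy sketch below illustrates the idea; the event names, window, and alert threshold are invented for illustration and are not drawn from any specific platform.

```python
# Toy signature-based forecasting: learn from history how often event A is
# followed by event B within a window, then raise a preventive alert when A
# recurs. Events, window, and threshold are illustrative assumptions.
from collections import defaultdict

WINDOW_MIN = 30          # look-ahead window in minutes
ALERT_THRESHOLD = 0.8    # alert if B followed A at least 80% of the time


def learn_signatures(history):
    """history: list of (timestamp_in_minutes, event_name), sorted by time."""
    followed = defaultdict(int)   # (A, B) -> times B followed A in the window
    occurred = defaultdict(int)   # A -> times A occurred
    for i, (t_a, a) in enumerate(history):
        occurred[a] += 1
        seen = set()
        for t_b, b in history[i + 1:]:
            if t_b - t_a > WINDOW_MIN:
                break
            if b != a and b not in seen:
                followed[(a, b)] += 1
                seen.add(b)
    return {pair: n / occurred[pair[0]] for pair, n in followed.items()}


def forecast(event, signatures):
    """Return follow-on events likely enough to warrant a preventive alert."""
    return [(b, p) for (a, b), p in signatures.items()
            if a == event and p >= ALERT_THRESHOLD]


history = [(0, "disk_latency_spike"), (20, "db_timeout"),
           (60, "disk_latency_spike"), (85, "db_timeout"),
           (120, "disk_latency_spike"), (140, "db_timeout")]
signatures = learn_signatures(history)
print(forecast("disk_latency_spike", signatures))  # -> [('db_timeout', 1.0)]
```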
The third, more powerful capability is what he refers to as ticketless incident management. By applying Bayesian analysis and causal inference, the system can identify recurring patterns and root causes without needing structured tickets. That changes the game. These innovations do not just reduce the mean time to resolution. They allow infrastructure and operations teams to reframe their work around prevention rather than recovery. More importantly, they create a foundation for aligning IT with customer experience.
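The Bayesian element can be illustrated with a small sketch: given a corpus of past incidents and their confirmed root causes, a new symptom pattern can be ranked against probable causes using Bayes' rule. The incident data and categories below are invented, and the naive-Bayes scoring is a deliberately simplified stand-in for the causal inference described, not a description of how any particular product works.

```python
# A compact sketch of Bayesian reasoning over historical incidents: rank the
# probable root cause of a new symptom pattern without a structured ticket.
# The incident corpus and categories are invented for illustration.
from collections import Counter

# Historical incidents: (observed symptoms, confirmed root cause)
history = [
    ({"slow_checkout", "db_timeout"}, "db_connection_pool"),
    ({"slow_checkout", "db_timeout"}, "db_connection_pool"),
    ({"slow_checkout", "cache_miss_spike"}, "cache_eviction"),
    ({"login_errors", "cert_warning"}, "expired_certificate"),
]


def posterior(symptoms):
    """Approximate P(root cause | symptoms) with a naive-Bayes style score."""
    causes = Counter(cause for _, cause in history)
    total = sum(causes.values())
    scores = {}
    for cause, n in causes.items():
        prior = n / total
        likelihood = 1.0
        for s in symptoms:
            # Laplace-smoothed P(symptom | cause) estimated from the history.
            with_s = sum(1 for sym, c in history if c == cause and s in sym)
            likelihood *= (with_s + 1) / (n + 2)
        scores[cause] = prior * likelihood
    norm = sum(scores.values())
    return {c: round(v / norm, 3) for c, v in scores.items()}


print(posterior({"slow_checkout", "db_timeout"}))
# "db_connection_pool" dominates the ranking for this symptom set.
```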
Enterprise IT often exists in isolation from the customer journey it supports. Bag sees AI as the key to closing that gap. “When the back-end systems become more reliable, the customer-facing applications are naturally more resilient,” he says. “But there is more to it than uptime.” He describes the emergence of AI-driven synthetic monitoring and real-user analytics, which track how customers interact with applications in real time. “AI can now interpret user journeys, spot friction points, and even correlate user drop-offs with specific infrastructure issues. That kind of end-to-end visibility is powerful.”
He also sees promise in sentiment analysis and behavioural tracking, not for marketing optimisation, but for IT performance tuning. “If you know where users are exiting the application, you can look at the system performance around those points,” he notes. “AI brings that feedback loop into the heart of IT operations.”
Scaling AI requires more than infrastructure
Despite the sophistication of modern AI, deployment at scale remains a bottleneck, mainly due to infrastructure constraints and data readiness issues. “People forget that AI models, especially large ones, are expensive to run,” Bag says. “If you want to use them securely and keep data in-house, you need serious compute. That is not trivial.”
He points to new developments in efficient model tuning and runtime optimisation as essential for unlocking value. “The cost of LLMs is coming down,” he adds. “Techniques like parameter-efficient tuning and domain-specific compression will make AI more accessible. But right now, infrastructure scaling is still a challenge.”
That challenge is not only technical. For many enterprises, the barrier is a lack of clarity in data strategy: before an organisation even thinks about models, it needs to secure its data, categorise it, and separate what is needed for generative AI from what is needed for machine learning. A single mistake with sensitive data can derail an entire programme.
Traditional IT KPIs, such as noise reduction, uptime, and response time, are no longer sufficient. To demonstrate the actual value of AI, Bag believes operations must align with business outcomes. “If your loan application system goes down, the real question is not whether the server was at 95 per cent uptime,” he says. “It is whether you lost revenue or customers.”
He outlines a framework called vertical observability, linking business KPIs to application and infrastructure metrics. “You start with the business goal, say, the number of loans processed, and trace that through the application performance and infrastructure reliability. Only then can you show how IT contributes to business success.”
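In code, vertical observability amounts to an explicit mapping from a business KPI down to the application and infrastructure metrics that support it, so a dip in the KPI can be explained layer by layer. The sketch below is illustrative only; the KPI, metric names, and thresholds are assumptions made for the example.

```python
# A simplified sketch of vertical observability: a business KPI declared at
# the top, mapped to the application and infrastructure metrics that support
# it, so a KPI dip can be traced down the stack. All names and thresholds
# are illustrative assumptions.
KPI_MAP = {
    "loans_processed_per_hour": {
        "application": {"loan_api_error_rate": 0.02, "loan_api_p95_ms": 800},
        "infrastructure": {"db_cpu_utilisation": 0.85, "queue_depth": 500},
    }
}


def explain_kpi_dip(kpi, live_metrics):
    """Return the layers and metrics breaching their thresholds for this KPI."""
    findings = []
    for layer, thresholds in KPI_MAP[kpi].items():
        for metric, limit in thresholds.items():
            value = live_metrics.get(metric)
            if value is not None and value > limit:
                findings.append((layer, metric, value, limit))
    return findings


live = {"loan_api_error_rate": 0.11, "loan_api_p95_ms": 640,
        "db_cpu_utilisation": 0.97, "queue_depth": 120}
for layer, metric, value, limit in explain_kpi_dip("loans_processed_per_hour", live):
    print(f"{layer}: {metric} = {value} (limit {limit})")
```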
Equally important is horizontal observability, which involves tracing individual transactions across multiple systems and identifying where they stall. When a business user reports that their transaction is stuck, the system should already know, and know exactly where. That is what true AI-powered observability looks like.
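Horizontal observability, by contrast, looks much like distributed tracing: follow a single transaction's hops across services and flag where it fails or stalls. The toy example below assumes a simple span structure, and the services, timings, and threshold are invented for illustration.

```python
# A toy illustration of horizontal observability: follow one transaction's
# hops across systems and report where it stalled. The span structure loosely
# mirrors distributed-tracing data; services and timings are invented.
STALL_THRESHOLD_MS = 1000


def find_stall(trace):
    """trace: ordered list of spans, each {'service', 'duration_ms', 'status'}."""
    for span in trace:
        if span["status"] != "ok":
            return f"Transaction failed at {span['service']} ({span['status']})"
    slowest = max(trace, key=lambda s: s["duration_ms"])
    if slowest["duration_ms"] >= STALL_THRESHOLD_MS:
        return (f"Transaction stalled at {slowest['service']} "
                f"({slowest['duration_ms']} ms)")
    return "Transaction completed within normal bounds"


trace = [
    {"service": "web-frontend", "duration_ms": 120, "status": "ok"},
    {"service": "loan-api", "duration_ms": 310, "status": "ok"},
    {"service": "credit-scoring", "duration_ms": 4800, "status": "ok"},
    {"service": "core-banking", "duration_ms": 95, "status": "ok"},
]
print(find_stall(trace))  # -> Transaction stalled at credit-scoring (4800 ms)
```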
Designing IT from a blank slate
If given the chance to rebuild enterprise IT from scratch, Bag would start with three principles. “First, move to cloud wherever possible,” he says. “Infrastructure management is not your core business. Second, shift to software-as-a-service. You should be focused on outcomes, not ownership. And third, embed AI into every layer of that architecture. We need to let go of practices built around firefighting. Agentic AI is not just a technology upgrade. It is a mindset shift, from reacting to predicting, from solving problems to preventing them.”
He is optimistic that organisations are ready to make the leap. “We have already moved past the fear,” he concludes. “Now the focus is on guardrails, governance, and transparency. But the architecture is there. The agents are ready. The next two to five years will define who leads and who falls behind.”
The era of dashboards and rulebooks is fading. In its place, agentic AI is emerging as the intelligent nervous system of enterprise IT. It promises to shift operations from fragmented workflows to self-directed ecosystems capable of learning, reasoning, and acting with purpose.
The challenge now is not whether AI can be trusted to act. It is whether enterprises are prepared to let go of old assumptions and allow the agents to lead.