The infrastructure that once existed to support human workflows is being rewritten to support machine ones. As AI agents begin to plan, execute and iterate independently, the challenge for enterprises is no longer adoption, but whether their operating models can survive the transition.
There has been a quiet but consequential shift in how artificial intelligence is being deployed across enterprise systems. For much of the past three years, organisations have treated generative AI as an augmentation layer, something that could be added to existing workflows to improve efficiency or enhance decision-making. The assumption underpinning that approach was that AI would adapt to the organisation. The latest data suggests the inverse is now beginning to take hold. According to the 2026 State of AI Agents report from Databricks, AI agents are no longer behaving as passive tools but as active systems that reshape the environments in which they operate.
The contradiction at the heart of the report is difficult to ignore. While 67 per cent of organisations are already using AI-powered tools, only 19 per cent have deployed AI agents in any meaningful way, and even those deployments remain limited in scope. This gap does not reflect a lack of interest or investment. It reflects a deeper uncertainty about how these systems should be integrated, governed and scaled. Organisations are no longer deciding whether to use AI. They are struggling to understand what kind of system they are introducing into their operations.
What differentiates the current phase from earlier waves of AI adoption is the emergence of agentic architectures. These systems do not simply respond to prompts or generate outputs on demand. They are designed to plan, reason and execute multi-step workflows, often interacting with multiple tools and data sources without continuous human intervention. The report makes clear that enterprises are moving beyond isolated applications such as chatbots and copilots towards systems capable of orchestrating entire processes. This is not a refinement of existing workflows. It is a redefinition of how those workflows are constructed.
From tools to operating models
The most revealing signal of this transition is the rapid rise of multi-agent systems. Enterprises are no longer deploying single-purpose tools but are instead building coordinated environments in which specialised agents collaborate to complete complex tasks. The growth rate alone is striking, with usage increasing by 327 per cent in just four months. What matters more than the speed, however, is what these systems represent. They begin to resemble an operating model rather than a collection of tools, with supervisory agents directing tasks, coordinating actions and integrating outputs across different domains.
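The supervisory pattern the report describes can be reduced to a small sketch. Everything here is illustrative: the agent names, the fixed two-step plan and the string outputs are stand-ins for what would, in production, be LLM calls and tool invocations coordinated by a planning loop.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical specialised agents: each is just a function from a task
# description to a result. Real agents would wrap model calls and tools.
def research_agent(task: str) -> str:
    return f"findings for: {task}"

def drafting_agent(task: str) -> str:
    return f"draft based on ({task})"

@dataclass
class Supervisor:
    """Directs tasks to specialised agents and integrates their outputs."""
    agents: dict[str, Callable[[str], str]]

    def run(self, goal: str) -> str:
        # A fixed two-step plan for illustration; a production supervisor
        # would generate and revise this plan dynamically.
        findings = self.agents["research"](goal)
        return self.agents["draft"](findings)

supervisor = Supervisor(agents={"research": research_agent,
                                "draft": drafting_agent})
print(supervisor.run("summarise onboarding policy"))
```

The point of the pattern is that coordination logic lives in the supervisor, not in the individual agents, which is what lets specialised agents be swapped or added without rewriting the workflow.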
This shift becomes more tangible when examining how organisations are applying AI in production environments. The report shows that most use cases are not speculative or experimental but focused on automating routine, necessary tasks such as customer support, onboarding, predictive maintenance and regulatory reporting. Around 40 per cent of these use cases are directly tied to customer experience, indicating a clear alignment between AI deployment and measurable business outcomes. What emerges is a pragmatic approach in which organisations are not attempting to reinvent their entire operation at once but are instead embedding agents into specific areas where they can deliver immediate value.
The deeper implication is that organisations are beginning to externalise coordination. Where management layers once existed to route information, assign tasks and ensure consistency, agentic systems now perform those functions programmatically. The consequence is not simply faster execution, but a compression of organisational structure, where fewer human intermediaries are required to achieve the same outcomes.
Infrastructure rewritten for machines
Beneath this application layer, a more profound transformation is taking place. The infrastructure that supports enterprise systems is being reshaped at a pace that few organisations are fully accounting for. The report highlights a dramatic shift in database operations, with AI agents now responsible for creating 80 per cent of databases, a figure that has risen from almost zero in just two years. Even more striking is the extent to which agents have taken over the creation of database branches, with 97 per cent now being generated autonomously.
These figures point to more than efficiency gains. They signal a transfer of operational control from humans to machines. Database management has historically been characterised by deliberate planning, controlled provisioning and predictable workloads. Agent-driven systems operate under entirely different assumptions. They generate continuous, high-frequency operations, creating and discarding environments programmatically, and executing complex queries as part of iterative reasoning processes.
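The create-and-discard pattern behind those numbers can be sketched with nothing more than SQLite and a temporary directory. This is a minimal stand-in for agent-driven branching, not any vendor's API: the "environment" here is a throwaway database file that exists only for one step of an iterative reasoning loop.

```python
import os
import shutil
import sqlite3
import tempfile

def run_in_scratch_db(setup_sql: str, query: str):
    """Provision a throwaway database, run one query from an iterative
    reasoning step, then discard the entire environment."""
    workdir = tempfile.mkdtemp()
    conn = sqlite3.connect(os.path.join(workdir, "scratch.db"))
    try:
        conn.executescript(setup_sql)
        return conn.execute(query).fetchall()
    finally:
        conn.close()
        shutil.rmtree(workdir)  # the environment lives only for this step

rows = run_in_scratch_db(
    "CREATE TABLE t(v INTEGER); INSERT INTO t VALUES (1), (2), (3);",
    "SELECT SUM(v) FROM t;",
)
print(rows)  # [(6,)]
```

An agent running this loop hundreds of times per hour is exactly the high-frequency, programmatic workload that traditional provisioning processes were never designed to absorb.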
In response, a new class of infrastructure is emerging, built specifically for the demands of agentic systems. These environments prioritise concurrency, programmability and real-time responsiveness, enabling agents to operate at a scale and speed that would be unmanageable through manual intervention. This is not simply an upgrade to existing systems. It represents a shift in the fundamental design principles of enterprise infrastructure, where the primary users are no longer human operators but autonomous systems.
Running alongside this shift is the rise of vibe coding, where users describe outcomes in natural language and allow AI to generate the underlying code. This has led to the emergence of citizen developers capable of building functional applications without traditional engineering expertise. More than 50,000 data and AI applications have already been created, growing at a rate of 250 per cent in six months. The result is a rapid expansion in the number of systems being built, placing additional pressure on the infrastructure layer to scale accordingly.
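The vibe-coding loop itself is simple to sketch: a natural-language description goes in, source code comes out, and the generated artefact is executed directly. The generator below is hard-coded so the sketch is self-contained; in practice that function would be an LLM call.

```python
def generate_code(description: str) -> str:
    """Stand-in for a model call: in vibe coding, an LLM would turn the
    natural-language description into source. Hard-coded here so the
    sketch runs without a model."""
    return "def add(a, b):\n    return a + b"

def build_from_description(description: str):
    source = generate_code(description)
    namespace: dict = {}
    exec(source, namespace)  # executing generated, not hand-written, code
    return namespace["add"]

add = build_from_description("a function that adds two numbers")
print(add(2, 3))  # 5
```

The `exec` call is the crux: the citizen developer never reads the source, which is precisely why the governance and evaluation layers discussed later in the report matter.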
Complex ecosystems and real-time execution
The growing complexity of these environments is reflected in the move towards multi-model strategies. The report shows that 78 per cent of organisations now use two or more large language model families, with a rising share using three or more. This approach allows organisations to select models based on performance and cost, tailoring them to specific tasks, but it also introduces additional layers of complexity in system design and integration.
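The performance-versus-cost selection the report describes is, at its core, a routing decision. The sketch below uses invented model names, prices and capability scores purely for illustration; real routers would also weigh latency, context length and data-residency constraints.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str            # hypothetical identifiers, not real model names
    cost_per_1k: float   # illustrative cost, not actual pricing
    quality: int         # coarse capability score

CATALOGUE = [
    Model("small-fast", cost_per_1k=0.1, quality=1),
    Model("mid-general", cost_per_1k=0.5, quality=2),
    Model("large-reasoning", cost_per_1k=2.0, quality=3),
]

def route(task_complexity: int) -> Model:
    """Pick the cheapest model whose capability meets the task's needs —
    the trade-off that multi-model strategies exist to exploit."""
    eligible = [m for m in CATALOGUE if m.quality >= task_complexity]
    return min(eligible, key=lambda m: m.cost_per_1k)

print(route(1).name)  # small-fast
print(route(3).name)  # large-reasoning
```

Each extra model family makes this table longer and the routing policy harder to validate, which is the integration complexity the report flags.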
At the same time, the shift towards real-time processing is redefining how AI is embedded within workflows. The report indicates that 96 per cent of AI requests are now handled in real time, reflecting the growing demand for immediate, interactive responses. This is not simply a technical preference. It reflects a move towards systems that operate continuously within the flow of work, influencing decisions as they are made rather than after the fact.
The combination of multi-model architectures and real-time execution creates an environment that is both powerful and difficult to manage. Systems are no longer static or predictable. They are dynamic, distributed and constantly evolving, requiring new approaches to coordination and oversight.
From experimentation to controlled autonomy
Despite these advances, a persistent challenge remains in moving AI from experimentation to production. The report highlights that as many as 95 per cent of generative AI pilots fail to reach real-world deployment, underscoring the difficulty of operationalising these systems at scale. The issue is not simply technical capability, but the complexity and risk associated with deploying autonomous systems across enterprise environments.
What distinguishes organisations that succeed is their investment in governance and evaluation frameworks. The adoption of governance tools has increased sevenfold in recent months, reflecting the urgency of managing the risks associated with agentic AI. The impact of these investments is significant, with organisations using governance frameworks able to move twelve times more AI projects into production, and those using evaluation tools achieving nearly six times the rate of deployment.
This reframes governance as an enabler rather than a constraint. In the context of agentic systems, governance provides the structure necessary to manage autonomy, ensuring that systems operate within defined parameters while still allowing for continuous adaptation. Evaluation frameworks, in turn, create feedback loops that enable systems to improve over time, transforming them from static tools into dynamic, learning systems.
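Evaluation as a deployment gate can be sketched in a few lines. The agent, cases and threshold below are all illustrative; real harnesses would score free-form LLM outputs rather than check exact matches, but the gating logic is the same: measure first, promote only above a defined bar.

```python
def evaluate(agent, cases, threshold=0.9):
    """Run an agent against labelled cases and gate promotion to
    production on the pass rate — evaluation as an enabler, not a brake."""
    passed = sum(agent(inp) == expected for inp, expected in cases)
    rate = passed / len(cases)
    return rate, rate >= threshold

# A trivial stand-in agent; real evaluations would judge model outputs.
echo_agent = lambda text: text.upper()
cases = [("ok", "OK"), ("go", "GO"), ("no", "NO")]
rate, deploy = evaluate(echo_agent, cases)
print(rate, deploy)  # 1.0 True
```

Run on every change, a gate like this becomes the feedback loop the report describes, turning a static deployment decision into continuous, measured adaptation.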
The trajectory that emerges from the report is not one of gradual evolution but of systemic transition. AI agents are moving from the periphery to the centre of enterprise operations, reshaping how work is executed, how systems are built and how decisions are made. The pace of change is uneven, but the direction is clear.
The defining characteristic of this new phase is not intelligence alone, but autonomy at scale. Systems are being built that do not wait for human intervention in the way their predecessors did. They operate continuously, adapt dynamically and, increasingly, act independently.
The organisations that succeed will not be those that deploy the most agents, but those that understand how to integrate them into a coherent operating model. This is no longer a question of adoption but of alignment between systems that are increasingly autonomous and structures that were designed for human control.
The constraint is shifting. It is no longer the capability of the technology, but the ability of the organisation to accommodate it. Because the systems now emerging are not waiting to be managed.




