The year AI grows up

We are late with this piece, but perhaps that is fitting. The loudest AI predictions are always made too early, before reality has time to push back. 2026 is shaping up to be not the year of smarter machines, but the year organisations finally confront the uncomfortable operational consequences of deploying them at scale.

There is a quiet consensus emerging across technology leaders that the AI conversation is shifting away from capability and towards consequence. The early phase of generative AI was dominated by novelty, velocity and spectacle. Models became bigger, demos became flashier, and proof-of-concept pilots multiplied across every industry. What is now becoming clear is that the real bottlenecks are no longer algorithmic. They are structural, organisational and infrastructural. AI is no longer constrained by what it can do, but by what enterprises are capable of absorbing.

That tension runs through nearly every serious forecast for 2026. The technology is not slowing down, but belief is giving way to realism. Organisations are discovering that autonomous systems force uncomfortable questions about governance, skills, infrastructure, identity, sovereignty and risk. The next phase of AI is not about acceleration; it is about survival in production.

Organisational reality catches up with AI

Jonathan Kahan, co-founder of Quartz Labs, argues that by 2026 the real bottleneck will no longer be technological at all, but structural. “With 95% of AI pilots failing, the C-suite is entering 2026 with growing AI fatigue. But the issue is not the technology. It is organisational. Outdated habits, slow approvals and legacy decision-making models are blocking what AI now makes possible.”

He believes the problem runs deeper than tooling choices, questioning whether current interaction models even make sense. “So far, most AI use has defaulted to a chat interface. But is chat the right medium for complex decisions, collaboration or creative work? Should AI behave as a tool, a co-worker, an assistant that moves across applications or something embedded directly into every workflow? And once you answer that, a second question emerges: how should teams work with AI together?”

For Kahan, 2026 marks a shift away from acquisition towards organisational redesign. “Do organisations let every employee use AI as they see fit? Do they augment existing workflows? Or do they redesign the operating model entirely? Next year will not be about buying more AI, but about building a business that can actually use it.”

He also sees AI evolving from a set of personal tools into something more systemic. “In 2026, AI stops acting as a collection of isolated assistants and begins operating as a shared intelligence across the enterprise. Most employees will use AI to extend their thinking, but the real breakthrough comes when these individual interactions feed a collective decision layer that benefits the whole organisation.”

Without that shared context, he warns, “knowledge becomes fragmented, and thousands of personal AI helpers pull the company in different directions.” Instead, AI increasingly functions as infrastructure, holding “organisational memory and decision standards in one place,” reducing rework and accelerating alignment across teams.

Human-agent teaming moves from theory to operating model

One of the clearest shifts for 2026 is the move from AI as a tool to AI as a co-worker. The language of copilots is already beginning to feel inadequate. What organisations are experimenting with are multi-agent systems, autonomous workflows and decision systems that operate alongside human staff, rather than merely supporting them.

Steven Webb, UK Chief Technology Officer at Capgemini, frames this as a fundamental change in how work itself is structured. “It’s becoming abundantly clear that UK consumers value services that blend human empathy with the speed and efficiency of AI,” he says. “That expectation is pushing organisations to explore new operating models where people and autonomous agents work side by side. Over the next year, businesses will pour effort into ironing out the fundamentals, from defining which tasks should be delegated to agents, to tackling practical questions such as how to charge for agentic workloads and measure their performance.”

What Webb is describing is not an interface problem, but an organisational one. If AI agents are performing real work, then enterprises must decide where accountability sits, how decisions are audited, and what human oversight means in practice. The technical challenge is solvable. The cultural one is not.

“Unlocking human-AI chemistry will be high on the priority list,” Webb continues. “To get there safely, businesses will need controlled environments where they can test, tune and validate agent behaviours before putting them into production. Sandbox initiatives such as the UK government’s AI Growth Lab will become crucial. They’ll allow organisations to stress-test autonomous agents and develop the human–machine collaboration patterns that will underpin this next wave of productivity.”

This emphasis on experimentation with guardrails rather than uncontrolled deployment reflects a growing recognition that agentic AI is not just another software upgrade. It changes the social architecture of organisations, redistributing authority, expertise and responsibility in ways that most governance structures were never designed to handle.

Large action models and the limits of autonomy

The practical reality of agentic AI is still far messier than vendor narratives suggest. While consumer-facing tools create the impression of seamless intelligence, enterprise systems remain brittle, expensive and unpredictable when pushed beyond narrow tasks.

Tim Ensor, General Manager of Intelligence Services at Cambridge Consultants, highlights this gap between expectation and operational reality. “Large action models are often another way of talking about Agentic AI,” he adds. “When we talk about using large language models combined with other capabilities like tools, memory and data, this is when we start getting into the whole field of Agentic AI. The current state I would say is deploying these systems in varying business use cases. The focus is on accelerating software development, and then in general business planning.”

Ensor’s caution is not about capability, but about context. Agentic systems perform best in environments where ambiguity exists, but where the cost of error remains tolerable.

“These large action models are much better at coping with a higher degree of ambiguity,” he adds. “But one of the bigger challenges the industry faces is the balance between productivity and safety. The regulations for how we treat these new ranges of robotics and autonomous systems are still being worked through.”

This tension becomes acute in physical environments, where AI systems interact directly with people, infrastructure and assets. The fantasy of autonomous enterprise gives way to the reality of liability, compliance and operational risk.

The infrastructure reckoning

If agentic AI exposes organisational fragility, infrastructure exposes technical fragility. Many enterprises are discovering that they are building intelligent systems on foundations that were never designed for real-time, autonomous workloads.

Chintan Patel, UKI Chief Technology Officer at Cisco, describes this as a collision between old and new forms of technical debt. “A quiet but critical conflict hides beneath the glittering promise of the AI revolution: the collision of legacy technical debt with emerging AI infrastructure debt,” he says. “In the race to deploy AI, many organisations are stacking quick fixes and scattered data on top of ageing infrastructure. The result is a growing liability, smart systems running on foundations that were never built for today’s speed, scale or security demands.”

The danger, Patel argues, is not that AI fails, but that it succeeds too quickly for the underlying systems to cope. “The year ahead will be defined by those who modernise their fundamental network infrastructure,” he adds. “Prioritising a secure-by-design overhaul today will do more than pay off the debts of yesterday. It will build the resilient, AI-ready backbone to power a safer, faster, transformative future.”

This reframes AI strategy as an infrastructure programme rather than an application roadmap. Without resilient networks, identity layers and data architectures, autonomy becomes operationally unsustainable.

AI moves to the edge

Nowhere is this more visible than in the shift toward edge AI. The next generation of models will not be trained on scraped text, but on telemetry from physical systems. “Data fuels AI, but we’ve barely started to access what truly exists,” Patel explains. “With 22.4 billion IoT devices generating more than 90 zettabytes a year, 2026 will see organisations finally tap into the vast well of telemetry, machine, IoT and IIoT data.”

This transition pushes AI out of the cloud and into factories, energy systems, logistics networks and healthcare infrastructure. “AI can analyse and combine these streams in ways humans can’t, by training domain-specific models that could reshape industries as dramatically as generative AI did,” Patel continues. “To enable these models, 2026 will bring a shift toward AI at the Edge, preserving privacy, critical in industrial environments.”

The implication is profound. AI governance can no longer sit purely at the application level. It must operate inside networks, devices and physical systems where failure carries real-world consequences.
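The privacy-preserving pattern Patel alludes to can be sketched in a few lines: telemetry is scored on the device and only a compact summary leaves the site, so raw machine data never travels upstream. This is an illustrative sketch only; the function name, the summary fields and the anomaly threshold are assumptions for the example, not any vendor's API.

```python
import statistics

# Illustrative edge-AI sketch: score a window of sensor readings locally
# and ship only aggregate statistics, keeping the raw telemetry on-site.

def summarise_window(readings, threshold=1.5):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    anomalies = [r for r in readings if abs(r - mean) > threshold * stdev]
    # Only this summary is sent upstream, never the raw readings.
    return {"count": len(readings), "mean": round(mean, 2), "anomalies": len(anomalies)}

window = [20.1, 19.8, 20.3, 20.0, 35.7, 19.9]
print(summarise_window(window))  # one reading (35.7) is flagged as anomalous
```

Real deployments would replace the statistical rule with a domain-specific model, but the design choice is the same: inference runs where the data is generated, and governance applies to what is allowed to leave the device.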

Digital sovereignty becomes operational

As AI systems become more embedded, questions of control and jurisdiction move from policy to engineering. “In 2026, I expect digital sovereignty to become a defining strategic priority for UK organisations,” Webb argues. “No longer just a policy concept, it’s already a prominent topic in the conversations we’re having across government and highly regulated industries here.”

Cisco’s Patel sees the same trend globally. “Digital sovereignty won’t slow innovation. It will redefine where and how it happens, shifting from theory to execution in 2026, as tighter data-localisation laws take hold. Nations and blocs will assert control over their infrastructure, data and technology stacks, reshaping the digital landscape.”

Rather than full isolation, sovereignty becomes a risk management strategy. “Demand for sovereign cloud solutions will rise, along with greater reliance on regional providers and renewed interest in on-premises or air-gapped data centres,” Patel continues. “A full overhaul of global infrastructure is unlikely, but selective migrations and diversified cloud strategies will become the norm.”

The result is a new paradox, as Webb describes it, where sovereignty is defined not by independence, but by resilient interdependence across controlled ecosystems.

Identity becomes the new perimeter

Perhaps the most underappreciated consequence of agentic AI is the collapse of traditional security models. When autonomous agents perform tasks, change roles and interact with systems dynamically, static identity frameworks break down.

“Deepfakes, transparency gaps, bias and accountability issues have made trust a prerequisite for AI adoption,” Patel explains. “Critical systems need protections that scale with distributed workloads and a blended human–digital workforce. With AI agents acting independently, identity becomes the core control mechanism. With 82 per cent of EMEA organisations planning to deploy AI agents, identity management will become a defining trend. And with AI agents shifting roles instantly, traditional identity systems won’t cut it.

“This forces organisations to rethink governance at a fundamental level. As the line between humans and AI agents blurs, organisations must govern the human–agent pair: who’s in charge, what they can access, how is their behaviour monitored, and what happens when things go wrong.”

In this world, security is no longer about protecting systems from external attackers. It is about controlling autonomous behaviour from within.
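The shift Patel describes can be illustrated with a minimal, hypothetical policy check: rather than granting an agent a static role, every access request is evaluated against the human-agent pair and the agent's current task, and every decision is logged. All of the names here (`AgentIdentity`, `PairPolicy`, the scope strings) are invented for this sketch and do not correspond to any real product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: access is decided per request, for the
# human-agent pair, rather than granted once to a static role.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str         # the accountable human in the pair
    current_task: str  # agents "shift roles instantly", so task is part of identity

@dataclass
class PairPolicy:
    allowed: dict = field(default_factory=dict)   # task -> set of permitted scopes
    audit_log: list = field(default_factory=list)

    def check(self, agent: AgentIdentity, scope: str) -> bool:
        # Deny by default: a scope is granted only if listed for the current task.
        ok = scope in self.allowed.get(agent.current_task, set())
        # Every decision is recorded against the human-agent pair.
        self.audit_log.append((agent.owner, agent.agent_id, agent.current_task, scope, ok))
        return ok

policy = PairPolicy(allowed={
    "invoice-triage": {"read:invoices"},
    "invoice-payment": {"read:invoices", "write:payments"},
})

triage_bot = AgentIdentity("agent-7", owner="alice", current_task="invoice-triage")
print(policy.check(triage_bot, "read:invoices"))   # True: scope matches the current task
print(policy.check(triage_bot, "write:payments"))  # False: denied until the task changes
```

The point of the sketch is the answer to Patel's questions: who is in charge (the owner on the pair), what they can access (the per-task scopes) and how behaviour is monitored (the audit log) are all properties of the pairing, not of a fixed role.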

The workforce infrastructure problem

For all the focus on skills, most leaders now recognise that AI disruption is not primarily about job loss, but about the collapse of existing talent models. “2026 will force organisations to face up to a core existential question: what does talent look like in a world where AI can perform many traditionally safe roles?” Webb asks.

Patel echoes this view. “Companies race to deploy agents, yet their human systems (hiring, career paths, skills development) remain rooted in a pre-AI era,” he says. “This isn’t a failure of people; it’s a failure of workforce infrastructure.”

The solution is not basic AI literacy, but systemic redesign. “Companies will need a full-stack curriculum spanning the whole career ladder,” Patel continues. “From networking and cybersecurity fundamentals to data science, vibe coding and advanced AI capabilities.”

The winners, it seems, will not be those with the best models, but those capable of rebuilding human systems around them.

The bubble and the backlash

Not all forecasts are optimistic. Some see 2026 as a moment of correction rather than consolidation. Martin Brock, Chief Technology Officer at Cambridge Consultants, describes an industry approaching saturation. “AI is at a point of climax, characterised by expectations and mounting unsustainability,” he says. “This bubble is likely to burst as the hype and reality get untangled to a significant extent.”

For Brock, the real shift is not technological, but economic. “The current generation of AI models will become utilities rather than premium differentiators. This will be very similar to internet connections. The industry will shift to delivering the same or slightly better utility or output, but at a lower energy usage or lower cost.”

This commoditisation reframes AI investment entirely. Competitive advantage moves from model ownership to system integration, governance and operational maturity.

Quantum moves from science to infrastructure

Beyond classical AI, quantum computing is also entering a new phase. “Quantum computing is shifting from ‘can we really do this?’ to ‘what can it unlock?’” Patel says. “The race to quantum-safe infrastructure will intensify.”

Cisco is already working on quantum networking. “In 2026, Cisco engineers will continue working on a network built on the unique behaviour of quantum particles, to connect quantum computers and share information securely.”

This points toward a future where quantum becomes another layer of enterprise infrastructure rather than a niche research field. “A distributed, scalable quantum network could unlock a vast new computational space,” Patel adds. “By the late 2030s, this may culminate in a quantum internet.”

Data sovereignty becomes the real competitive moat

As AI models themselves become increasingly commoditised, Lenovo argues that the real battleground shifts decisively toward data ownership and governance. The competitive advantage no longer lies in who has access to the most advanced foundation model, but in who controls the data those models are allowed to reason over.

“By 2026, data sovereignty, knowing where data resides, how it is governed, and who controls its use, will become a top enterprise priority, regardless of whether organisations are deploying small language models or large language models in their generative or agentic AI workflows,” says Robert Daigle, Global AI Lead at Lenovo. “As foundation models become increasingly commoditised, the real differentiator will be data quality, uniqueness, and governance. Enterprises that fail to maintain ownership and integrity of their data risk eroding the very asset that fuels their AI advantage. Protecting and preserving data value will be essential for compliance and to maintain a competitive edge.”

This reframes AI strategy away from model procurement and toward long-term information stewardship. In practical terms, it elevates questions of data lineage, access control, jurisdiction, and retention from compliance checklists into board-level strategy. For many organisations, AI advantage will depend less on algorithmic brilliance than on whether their data foundations remain legally sovereign, technically accessible, and organisationally trusted.

Every watt becomes a strategic decision

Across EMEA, Lenovo sees energy overtaking compute as the dominant design constraint shaping AI deployment. The problem is no longer simply how much processing power is available, but whether the physical energy systems can sustain it.

“In 2026, energy will overtake compute as the primary design constraint for AI infrastructure across EMEA,” says Simone Larsson, EMEA Head of Enterprise AI at Lenovo ISG. “Every watt now matters. Europe’s grid systems remain under significant strain, while organisations are approaching ambitious sustainability commitments, forcing CIOs to treat energy not as an operational cost, but as a strategic limitation.”

This shift is already rewriting infrastructure planning. “Data-centre planning will begin with energy availability, efficiency, and location, not server density,” Larsson continues. “Power-aware design encompassing low-footprint systems, advanced cooling, and intelligent workload placement will become essential, particularly in secondary markets and edge locations with limited grid capacity.”

In this view, infrastructure geography becomes a competitive variable. Regions with renewable abundance, such as the Nordics, will attract AI investment, while Southern and Eastern Europe experiment with microgrids and hybrid generation. In the Middle East and Africa, on-site power generation moves from contingency to core strategy. AI leadership, in Lenovo’s framing, becomes inseparable from energy adaptability.

A sobering conclusion

Across all these perspectives, a consistent picture emerges. AI in 2026 is not defined by breakthroughs in intelligence, but by breakdowns in readiness. Organisations are discovering that autonomy exposes everything that was previously hidden: brittle infrastructure, unclear governance, fragmented data, outdated talent systems and fragile trust models.

The hype cycle is giving way to operational gravity. AI is no longer an experiment. It is becoming an organisational stress test. What survives will not be the smartest systems, but the most disciplined ones. The companies that thrive will be those that treat AI not as a feature, but as a structural transformation of how decisions are made, how power is distributed, and how accountability is enforced.

In that sense, 2026 may not be remembered as the year AI changed the world. It may be remembered as the year organisations finally realised what it would take to live with it.
