The intelligence problem AI has been avoiding

Artificial intelligence has scaled faster than its own understanding of how intelligence forms. The result is a generation of systems that perform impressively but behave in ways their creators cannot reliably explain, control, or sustain.

For a decade, the field has convinced itself that intelligence is a function of scale, that if enough parameters are stacked, enough data consumed, and enough compute deployed, something resembling understanding will eventually emerge. That belief has delivered extraordinary results, but it has also embedded a structural assumption that is now beginning to fracture under scrutiny. Intelligence, as it appears in nature, does not arise from size alone, and the systems currently dominating AI bear only a superficial resemblance to the biological processes they claim to emulate.

This tension sits at the centre of research led by Jesper N. Tegnér, Professor of Bioscience and Computer Science at King Abdullah University of Science and Technology. His work does not dispute that scaling works in a narrow performance sense. Large models do outperform smaller ones across a wide range of tasks. What it challenges is the idea that scaling is the mechanism through which intelligence itself emerges, a distinction that reframes the problem from one of capability to one of causality.

“There is a conviction that we can do it with scaling and optimisation, and that at some point something almost magical will happen,” Tegnér explains. “That the system will become intelligent. I think that is the most common misunderstanding.”

The consequences of that misunderstanding are becoming increasingly visible. Training large-scale models now demands vast energy resources, yet even with that investment, the resulting systems remain opaque, inconsistent, and often fragile when exposed to real-world conditions. These issues are frequently treated as temporary limitations, artefacts of a rapidly evolving field. Tegnér's work suggests something more fundamental: that the architecture of these systems may be misaligned with how intelligence operates.

At its core, the research asks a question that the industry has largely avoided. If intelligence is not simply a function of scale, what determines how it emerges?

The architecture inside intelligence

Modern neural networks are typically treated as dense, highly connected systems where performance emerges from the optimisation of vast numbers of parameters. What Tegnér and his collaborators have done is step inside that assumption and examine the smallest structural patterns that underpin these networks. These patterns, known as network motifs, are simple arrangements of connections between a handful of nodes, but they recur across biological systems with striking consistency.

Their persistence is not incidental. From gene regulation in bacteria to signalling pathways in cells and neural circuits in the brain, these motifs appear as fundamental building blocks, suggesting that structure, not just scale, plays a defining role in how complex behaviour emerges.

The study analysed hundreds of thousands of these motifs and identified a critical distinction between two structural types. Coherent loops reinforce signals in a consistent direction, while incoherent loops combine activation and suppression within the same structure. At first glance, the latter appears counterintuitive, almost inefficient, because it resembles a system applying opposing forces simultaneously. “It is like driving a car where you accelerate and brake at the same time,” Tegnér says. “It does not seem sensible.”
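
The contrast can be made concrete with a minimal simulation, a sketch rather than the study's actual model: an input X drives an intermediate Y, and both act on an output Z. When Y reinforces the input, the output climbs to a plateau and stays there; when Y opposes it, the output pulses and then settles back, the accelerate-and-brake behaviour producing built-in self-correction.

```python
import numpy as np

# Toy feed-forward loop, Euler-integrated: input X drives intermediate Y,
# and both act on output Z. Illustrative only; not the study's model.
dt, steps = 0.01, 2000
x = 1.0                          # input switches on at t = 0
y = np.zeros(steps)
z_coh = np.zeros(steps)          # coherent: X and Y both activate Z
z_inc = np.zeros(steps)          # incoherent: X activates Z, Y suppresses it

for t in range(1, steps):
    y[t] = y[t-1] + dt * (x - y[t-1])
    z_coh[t] = z_coh[t-1] + dt * (x * y[t-1] - z_coh[t-1])
    z_inc[t] = z_inc[t-1] + dt * (x * (1.0 - y[t-1]) - z_inc[t-1])

print(f"coherent settles at   {z_coh[-1]:.3f}")        # holds its plateau
print(f"incoherent settles at {z_inc[-1]:.3f}, "
      f"after peaking at {z_inc.max():.3f}")           # pulses, then adapts
```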

What the research demonstrates, however, is that these incoherent structures are not inefficiencies but mechanisms of control. They provide greater numerical stability and a richer capacity to represent complex relationships, while coherent structures tend to concentrate in high-gradient regions of the optimisation landscape, making them more sensitive to noise and perturbation.
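
The stabilising effect shows up even in a static toy version of the same motif. In the sketch below, which is illustrative and not drawn from the paper, flipping the sign of the indirect connection determines how much input noise leaks through to the output.

```python
import numpy as np

# Static version of the motif: a direct x -> z edge plus an indirect
# x -> y -> z edge whose sign we flip. Illustrative only.
rng = np.random.default_rng(1)

def motif(x, sign):
    y = np.tanh(x)                # intermediate node
    return np.tanh(x + sign * y)  # +1: reinforcing edge, -1: opposing edge

noise = rng.normal(scale=0.1, size=100_000)   # small input perturbations
print(f"coherent   output std: {motif(noise, +1.0).std():.4f}")  # amplifies
print(f"incoherent output std: {motif(noise, -1.0).std():.4f}")  # damps
```

With the reinforcing edge, the two paths add and roughly double the input fluctuations; with the opposing edge, they nearly cancel and the perturbations are attenuated by orders of magnitude.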

This distinction reshapes how learning unfolds inside a network. Systems dominated by coherent structures converge quickly but narrowly, locking onto specific optimisation paths and amplifying sensitivity to irregularities in the data. Systems incorporating incoherent motifs behave differently, maintaining broader exploration during training and resisting the influence of noise.

The difference is subtle at the level of individual components, but profound at the level of system behaviour. Two networks trained on the same data can diverge dramatically depending on how their internal connections are organised, one becoming brittle and reactive, the other stable and adaptable. The data has not changed. The architecture has.

Why scaling is not enough

The dominance of scaling has obscured this reality. Increasing model size improves performance, but it does so by compensating for inefficiencies rather than resolving them. It is an approach that works, but at a cost that is becoming increasingly difficult to justify.

“The brain consumes around 20 watts,” Tegnér notes. “If you compare that with large language models, you are looking at something like a million times more energy consumption for training and running these systems. That is not sustainable.”
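
The arithmetic behind that comparison is straightforward, though the training figures in the sketch below are hypothetical round numbers chosen only to make the orders of magnitude visible.

```python
# Back-of-envelope version of the comparison. The brain figure is the one
# quoted above; the training figures are assumptions, not measurements.
BRAIN_WATTS = 20.0

TRAINING_ENERGY_KWH = 50_000_000      # assumption: ~50 GWh for one large run
TRAINING_DAYS = 90                    # assumption: wall-clock duration

joules = TRAINING_ENERGY_KWH * 3.6e6            # 1 kWh = 3.6 million joules
avg_watts = joules / (TRAINING_DAYS * 86_400)   # mean power over the run

print(f"average cluster power: {avg_watts / 1e6:.0f} MW")
print(f"brain-to-model ratio:  {avg_watts / BRAIN_WATTS:,.0f}x")
```

Under those assumptions, the run averages around 23 megawatts, roughly a million times the brain's draw, which is the order of magnitude Tegnér cites.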

The comparison is not simply about efficiency. It exposes a deeper divergence between artificial and biological intelligence. In the brain, structure is central to function. Different cell types, sparse connectivity, and layered organisation shape how information is processed, integrated, and retained. In artificial systems, by contrast, nodes are largely homogeneous, and intelligence is expected to emerge from the density of connections and the scale of optimisation.

“There are so many differences,” Tegnér explains. “In the brain you have thousands of different cell types. In AI systems, everything is more or less the same, and you just change the weights between them.”

That uniformity simplifies training, but it also limits interpretability and control. It contributes directly to the black-box nature of modern AI, where systems can produce accurate outputs without offering any meaningful explanation of how those outputs were generated. In domains such as healthcare, where decisions must be justified and understood, that limitation becomes a barrier to adoption rather than a technical inconvenience. If structure determines behaviour, then ignoring structure constrains progress.

Learning to be wrong

One of the most revealing aspects of the research is how different architectures respond to noise. In controlled experiments, networks incorporating incoherent motifs maintained performance even when training data was deliberately distorted. Networks dominated by coherent structures struggled to distinguish between signal and noise, effectively learning both as part of the task.
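
The shape of such an experiment can be approximated in a few lines. The sketch below is a proxy rather than the study's setup, using ridge damping as a stand-in for the motif-level stabilising mechanism, but it shows the same qualitative split: the undamped model absorbs the injected noise, while the stabilised one recovers the underlying signal.

```python
import numpy as np

# Proxy for the noise experiment: fit deliberately distorted labels with an
# effectively unregularised model and a stabilised one. Ridge damping stands
# in for the motif-level mechanism, which this sketch does not reproduce.
rng = np.random.default_rng(0)

n_train, n_test, n_feat = 200, 500, 300
W = rng.normal(size=(1, n_feat))                 # fixed random tanh features
b = rng.normal(size=n_feat)
feats = lambda X: np.tanh(X @ W + b)

X = rng.uniform(-3, 3, size=(n_train, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.5, size=n_train)  # distorted labels
X_test = rng.uniform(-3, 3, size=(n_test, 1))
y_test = np.sin(X_test).ravel()                              # clean targets

for ridge, label in [(1e-6, "undamped  "), (1.0, "stabilised")]:
    Phi = feats(X)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_feat), Phi.T @ y)
    rmse = np.sqrt(np.mean((feats(X_test) @ w - y_test) ** 2))
    print(f"{label} test RMSE: {rmse:.3f}")
```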

This behaviour mirrors a broader issue in deployed AI systems. The phenomenon often described as hallucination is not random error, but the result of optimisation processes that have been shaped by noisy or incomplete data. Systems follow the gradients they are given, even when those gradients are misleading. “In real-world applications, you cannot have systems that are that sensitive,” Tegnér says. “You need systems that find the fundamental structure in the data, not systems that react to every irregularity.”

The implications are immediate for sectors such as healthcare, industrial operations, and autonomous systems, where data is rarely clean and decisions carry real consequences. Systems that cannot separate signal from noise do not simply degrade in performance; they become unreliable in ways that are difficult to predict or mitigate.

Architectural design offers a way to address this at source. By embedding structural mechanisms that promote stability, it becomes possible to build systems that are inherently more robust, rather than attempting to correct instability after it emerges.

A different path to progress

The idea that architectural design could become a primary driver of AI performance represents a significant shift in the field’s trajectory. For the past decade, progress has been measured through expansion: larger models, more data, greater computational power. Tegnér’s work points towards a phase where refinement becomes equally important. “I think the major impact will be if we can make systems that are much more energy efficient, but still perform well, and are more transparent,” he says. “That would change the economics of AI completely.”

This is not a rejection of scaling but a rebalancing of priorities. Scaling increases capacity, but structure determines how that capacity is used. Without structural innovation, further scaling risks diminishing returns, both in terms of performance and cost efficiency.

The challenge is that architectural innovation does not lend itself to the same industrialisation as scaling. It requires experimentation, interdisciplinary thinking, and a willingness to look beyond established engineering approaches. Biology becomes relevant not as a template, but as a source of principles that have been refined over evolutionary timescales. “We should be more humble,” Tegnér says. “Nature has solved these problems over millions of years. Why not try to understand how?”

The limits of engineering intuition

For engineers, this shift introduces both uncertainty and opportunity. The history of AI suggests that theoretical arguments alone rarely drive adoption. Neural networks themselves were once dismissed as impractical until empirical results forced a change in perspective. The same pattern is likely to apply to architectural approaches inspired by biological systems. “I think engineers are pragmatic,” Tegnér says. “They will adopt what works.”

The risk is that the current success of scaling delays that transition. When an approach delivers measurable results, there is little immediate incentive to question its underlying assumptions. Yet the growing constraints around energy consumption, reliability, and explainability suggest that those assumptions are becoming increasingly difficult to sustain.

Architectural design does not replace scaling, but it introduces a new axis of competition. Systems that are more efficient, more robust, and more interpretable will carry advantages that cannot be replicated simply by increasing model size.

Intelligence as structure

What this research ultimately challenges is the idea that intelligence can be engineered through accumulation alone. More data and more parameters can extend capability, but they do not necessarily create understanding. Intelligence, in both natural and artificial systems, appears to be a product of how information is structured and processed, not just how much of it exists.

That distinction alters the direction of the field. Progress becomes less about building larger systems and more about building better ones, systems where structure, diversity, and organisation play a defining role in how intelligence emerges.

The industry has spent a decade pushing the outer limits of scale. The next phase may depend on a more fundamental question, one that has been largely overlooked in the rush to expand. Not how big a model can become, but how it is built in the first place.
