Europe’s confidence in regulating AI is masking a widening security gap

Europe has moved faster than most regions to define how artificial intelligence should be governed. The EU AI Act has set a global benchmark for responsible deployment, accountability and transparency. Yet new research suggests that while Europe may be regulating AI, it is struggling to secure it, leaving organisations exposed as AI systems become more deeply embedded in business operations and critical infrastructure.

That is the central finding of a new forecast report from Kiteworks, which warns that European organisations are falling behind global peers on the security controls needed to detect AI-specific threats, respond to AI-enabled breaches and govern AI data flows. As AI expands the digital attack surface, the report argues, Europe’s emphasis on policy has not been matched by equivalent investment in operational security.

Based on a survey of security, IT, compliance and risk leaders across ten industries and eight regions, the Data Security and Compliance Risk 2026 Forecast Report highlights consistent underperformance across key AI security metrics in France, Germany and the UK. These gaps, the report suggests, are no longer abstract compliance risks but tangible security exposures.

Regulation is advancing faster than detection

One of the clearest indicators of the problem is anomaly detection for AI systems, the ability to identify when models behave unexpectedly or outside their intended scope. According to the report, only 32 per cent of organisations in France, 35 per cent in Germany and 37 per cent in the UK have this capability in place, compared with a 40 per cent global benchmark.

That difference may appear modest on paper, but its implications are significant. When AI systems access data they should not, produce outputs that suggest compromise, or are manipulated through adversarial inputs, organisations without anomaly detection simply do not see the threat unfolding. In such cases, breaches are discovered late, if at all, amplifying regulatory exposure, reputational damage and the loss of sensitive data.
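The report does not prescribe how such detection should be built, but the underlying idea is straightforward: establish a statistical baseline of how a model normally behaves and raise an alert when an observation falls far outside it. The sketch below is a minimal, hypothetical illustration in Python; the class name, the signal being monitored and the thresholds are assumptions made for the example, not details drawn from the report.

```python
# Minimal sketch of output-level anomaly detection for an AI system.
# All names and thresholds here are hypothetical illustration, not a prescribed design.
from collections import deque
from statistics import mean, stdev

class OutputAnomalyMonitor:
    """Tracks a rolling baseline of one numeric behaviour signal (e.g. response
    length, confidence score, or tool-call count) and flags large deviations."""

    def __init__(self, window: int = 200, threshold: float = 4.0):
        self.baseline = deque(maxlen=window)  # recent "normal" observations
        self.threshold = threshold            # z-score that counts as anomalous

    def observe(self, value: float) -> bool:
        """Record one observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.baseline) >= 30:           # wait for a minimal baseline first
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True               # behaviour far outside the usual range
        self.baseline.append(value)
        return anomalous

if __name__ == "__main__":
    monitor = OutputAnomalyMonitor()
    for length in [120, 130, 118, 125] * 20:   # typical response lengths
        monitor.observe(length)
    # A sudden 4,000-token response might indicate prompt injection or data leakage.
    print(monitor.observe(4000))               # True
```

In practice the monitored signals would be richer, covering data access patterns, tool calls and output distributions, but the principle of baselining normal behaviour and flagging deviations is the same.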

Wouter Klinkhamer, general manager of EMEA strategy and operations at Kiteworks, framed the issue as a disconnect between intent and execution. He said Europe has led the world on AI governance frameworks, but governance without security is incomplete. When AI models behave anomalously, European organisations are less equipped than their global counterparts to detect it. In his view, that is not a compliance gap but a security gap.

Why incident response breaks down

The report also points to weaknesses in AI incident response, particularly around training-data recovery, the ability to examine the data an AI model was trained on in order to diagnose failures or prove what went wrong. Across Europe, adoption of this capability sits between 40 and 45 per cent, below the global average of 47 per cent and well behind Australia at 57 per cent.

Without training-data recovery, organisations lack the forensic tools needed to investigate AI incidents or demonstrate compliance to regulators after the fact. As AI systems become more autonomous and influential in decision-making, that lack of visibility makes both technical remediation and regulatory accountability harder to achieve.

Supply chain visibility presents another blind spot. Only 20 to 25 per cent of European organisations have adopted software bills of materials for AI components, compared with more than 45 per cent in leading regions. This means many organisations cannot see which third-party libraries, datasets or frameworks underpin their AI systems.
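A software bill of materials is, in essence, a machine-readable inventory of the components a system depends on; for AI, that inventory extends beyond libraries to models and training datasets. The fragment below is a simplified, hypothetical sketch of such an inventory and of the two questions it exists to answer when an advisory lands: which version are we running, and where did it come from. The component names, versions and supplier fields are invented for illustration.

```python
# Hypothetical sketch of an AI bill of materials as a simple component inventory.
# Component names, versions and the advisory below are invented for illustration.
ai_bom = [
    {"type": "library", "name": "example-inference-runtime", "version": "2.3.1", "supplier": "vendor-a"},
    {"type": "model",   "name": "sentiment-classifier",      "version": "1.0",     "supplier": "internal"},
    {"type": "dataset", "name": "support-tickets-2024",      "version": "2024-06", "supplier": "unknown"},
]

def affected_components(bom, advisory):
    """Return components matched by a (name, vulnerable_versions) security advisory."""
    name, vulnerable_versions = advisory
    return [c for c in bom if c["name"] == name and c["version"] in vulnerable_versions]

def unknown_provenance(bom):
    """Return components whose supplier cannot be established."""
    return [c for c in bom if c["supplier"] == "unknown"]

# Without an inventory like this, neither question can be answered after a disclosure.
print(affected_components(ai_bom, ("example-inference-runtime", {"2.3.1"})))
print(unknown_provenance(ai_bom))
```

Real AI bills of materials follow standardised formats such as CycloneDX, but the point stands: without this inventory, an organisation cannot say whether a published vulnerability in a shared AI component affects it.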

As attackers increasingly target vulnerabilities in shared AI components, this lack of visibility creates systemic risk. Organisations cannot trace the origin of a compromise, assess exposure or respond quickly, allowing attacks to propagate silently through interconnected systems.

Third-party risk and manual governance

The report also highlights weaknesses in how European organisations manage third-party AI risk. Only four per cent of French organisations and nine per cent of UK organisations have joint incident response playbooks with their AI vendors. When a vendor’s AI system is compromised, the absence of shared detection mechanisms and response protocols means breaches can spread across organisational boundaries before anyone realises there is a problem.

At the same time, AI governance processes remain heavily manual. Many organisations rely on continuous but labour-intensive compliance documentation rather than automated evidence generation. This creates a dual exposure. Regulators assessing fines may encounter incomplete or inconsistent records, while insurers evaluating breach claims may deny coverage if organisations cannot demonstrate that appropriate AI controls were in place.

The result is what Kiteworks describes as a governance payout gap. Compliance exists on paper, but when incidents occur, organisations struggle to prove they were operating responsibly.

From compliance risk to attack surface

Taken together, the findings paint a picture of AI systems that are increasingly powerful but insufficiently defended. AI models process sensitive data, integrate with core systems and make autonomous decisions. Every model that cannot be monitored for anomalies becomes a blind spot. Every third-party component that cannot be tracked becomes an inherited vulnerability. Every vendor relationship without a coordinated response plan becomes a breach waiting to spread.

The report argues that these are not governance failures waiting for regulatory audits, but attack surfaces waiting for adversaries. Compliance gaps carry the risk of fines. Security gaps carry the certainty of compromise, data exfiltration and operational disruption.

Kiteworks’ analysis identifies unified audit trails and training-data recovery as keystone capabilities that correlate with stronger performance across all other security metrics. Organisations that have implemented them show measurable advantages in both compliance readiness and resilience.

As Europe pushes forward with the enforcement of the EU AI Act, the message from the report is stark. Defining responsible AI is only half the task. Securing the systems that bring AI into production is the harder part. By the end of 2026, the organisations that close the gap between AI policy and AI security will be positioned to operate with confidence. Those that do not may discover their weaknesses not from regulators, but from attackers.
