The battle over AI identity and who pays the price


As AI adoption accelerates, regulatory uncertainty and cybersecurity vulnerabilities expose mistaken identity risks in large language models. As Mark Venables discovers, while companies struggle to balance compliance, economic viability, and ethical concerns, the battle over AI governance is just beginning.

Recent reports have raised alarm bells about the inability of leading AI models to comply with European regulations, particularly around cybersecurity resilience and discriminatory outputs. As policymakers tighten their grip on AI governance, companies face mounting uncertainty over how to remain compliant while keeping their systems commercially viable. Dr Ilia Kolochenko, CEO of cybersecurity firm ImmuniWeb, believes that while the European Union’s AI Act aims to bring clarity, it introduces ambiguity that could stifle innovation.

“The EU AI Act is a significant obstacle, being one of the first major regulatory frameworks to govern the deployment of large language models in Europe,” he explains. “It includes some broad requirements, such as compliance with EU copyright law, but there’s little clarity on what that entails. It reminds me of the early uncertainty we saw with GDPR in 2018, where terms were high-level, like ‘reasonable’ or ‘adequate’ security controls, but lacked specificity. Fortunately, the European Data Protection Board later released guidelines that clarified GDPR’s Article 32, covering cybersecurity requirements. For the AI Act, the situation is similar. It’s a high-level piece of legislation that could trigger significant litigation.”

This lack of clarity has prompted hesitation among major AI vendors, particularly those in the United States. “Some companies, particularly from the US, hesitate to deploy large language models in Europe,” Kolochenko adds. “They want to see how these requirements will be enforced before investing. It’s a challenge to interpret, implement, and stay compliant with the AI Act, especially when frameworks like GDPR already impose obligations on data handling. GDPR’s data portability, deletion, and updating rights can be difficult to reconcile with how AI models are built. In a model trained on massive datasets, ensuring compliance with GDPR requirements like the right to be forgotten is technically challenging, if not impossible. This regulatory complexity leaves companies uncertain about how to move forward.”

The cybersecurity challenge

Beyond regulatory concerns, AI models face significant security risks that make compliance even more difficult. The most notable is training data poisoning: if an adversary inserts malicious or misleading information during training, they can influence the model’s output in harmful ways. Imagine, for example, an AI-powered chatbot that suggests dangerous actions or provides misinformation because its training data was poisoned. In strategic industries such as aerospace or defence, adversaries could use poisoned data to inject vulnerabilities or backdoors into generated software code, compromising the final products.

AI vendors must establish stringent oversight mechanisms to safeguard their models against such risks. “This risk isn’t new; we’ve seen similar tactics used against other technologies,” Kolochenko continues. “What’s different here is the scale and reach of AI. Now, organisations need robust verification processes and careful oversight to avoid relying on potentially compromised models. AI developers must be cautious with their data sources and consider the possibility of adversarial actors attempting to influence the model through corrupted data.”
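To make the poisoning mechanism concrete, the short sketch below trains a toy text classifier twice, once on clean data and once with a handful of deliberately mislabelled examples added. It is an invented illustration, not code from ImmuniWeb or any vendor mentioned in this article, and the data and labels are made up purely for demonstration.

```python
# Toy illustration of training data poisoning (invented data, scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean training set: label 1 = safe advice, label 0 = unsafe advice.
texts = [
    "update your software regularly", "use strong unique passwords",
    "enable two factor authentication", "back up your files",
    "share your password with strangers", "disable all security updates",
    "click links in unsolicited emails", "reuse one password everywhere",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# An adversary slips in a few mislabelled samples that associate
# "disable ... security updates" with the "safe" class.
poisoned_texts = ["disable security updates to speed things up"] * 4
poisoned_labels = [1] * 4

def train(samples, targets):
    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(samples), targets)
    return vec, model

query = ["should I disable security updates"]

vec, clean_model = train(texts, labels)
print("clean:", clean_model.predict(vec.transform(query)))        # typically 0 (unsafe)

vec, poisoned_model = train(texts + poisoned_texts, labels + poisoned_labels)
print("poisoned:", poisoned_model.predict(vec.transform(query)))  # typically flips to 1
```

The same principle scales up: in a large language model the poisoned samples are buried among billions of tokens, which is why the verification and provenance checks Kolochenko describes are so difficult to apply after the fact.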

Economic viability versus compliance

Ensuring compliance and security, however, comes at a price, and one that many AI vendors may not be willing to pay. Many will likely aim for formal compliance, implementing policies and procedures and perhaps even appointing roles such as Chief AI Ethics Officer, but these measures will largely be superficial. The real issue is the economic viability of building AI models that are both effective and compliant. Major markets, from China and Russia to Latin America and the Middle East, are investing heavily in AI, each aiming to develop powerful models. This global competition fuels a race to deliver AI solutions quickly, but it also pressures companies to minimise costs wherever possible, often at the expense of deeper transparency and security measures.

A potential regulatory approach could involve additional levies on AI vendors. “The future may bring discussions about implementing a tax on AI platforms, similar to regulatory fees on certain industries,” Kolochenko says. “The competitive landscape is fierce, and some companies will eventually understand that AI models, especially large-scale generative ones, are not economically viable for every application due to their high operational costs. We’re already seeing environmental concerns around the energy consumption of large AI models. As models get larger, so do their CO2 emissions, which is another growing concern.”

Discrimination in AI: A red herring?

Concerns over AI-driven discrimination have also become a major regulatory focus, but Kolochenko suggests that the issue is often exaggerated. “Discriminatory outputs in AI stem primarily from biased training data, not from the AI itself,” Kolochenko explains. “AI models are neutral, like a hammer or a bicycle; they reflect the data on which they are trained. Take the example of a collection of CVs labelled ‘good’ or ‘bad.’ If the dataset reflects existing biases, such as a historical preference for male candidates, the model will replicate that bias in its evaluations. This is exactly what happened with Amazon in 2018 when their AI system disproportionately favoured male candidates simply because the existing workforce was mostly male.”

The challenge lies in obtaining high-quality, bias-free datasets. Addressing bias requires access to diverse, representative data, but gathering it is expensive and logistically challenging. In practice, biased data means the model replicates societal biases, reinforcing existing inequalities. AI is essentially a mirror of our societal structure: if problematic elements exist in the data, the model will replicate them.
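The CV example can be reduced to a few lines of code. The sketch below uses invented data and a hypothetical proxy feature; it is not Amazon’s system, only an illustration of how a model trained on historically biased hiring decisions reproduces that bias for otherwise identical candidates.

```python
# Toy illustration of bias inherited from historical hiring data (invented data).
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Each candidate: [years_of_experience, proxy_feature]. The proxy stands in for
# something on a CV that correlates with gender; past decisions penalised it.
X, y = [], []
for _ in range(500):
    experience = random.randint(0, 10)
    proxy = random.randint(0, 1)
    hired = 1 if experience >= 5 and proxy == 0 else 0  # biased historical labels
    X.append([experience, proxy])
    y.append(hired)

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in the proxy feature.
print(model.predict_proba([[7, 0]])[0][1])  # high predicted "hire" probability
print(model.predict_proba([[7, 1]])[0][1])  # much lower: the bias is replicated
```

Nothing in the model “decides” to discriminate; it simply fits the labels it is given, which is Kolochenko’s point about AI being a mirror of the data.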

From a regulatory standpoint, Kolochenko argues that discrimination concerns often overshadow more pressing issues. “Discrimination is a serious issue, but it’s not exclusive to AI,” he continues. “Countless non-AI systems discriminate based on nationality, age, disability, race, or religion. Anti-discrimination laws exist, but discrimination is pervasive, even in manual processes and traditional software systems. Yet, AI seems to be getting unique attention, which raises the question of whether we’re focusing too narrowly on AI-specific discrimination while overlooking more fundamental issues.”

The anti-trust dilemma

Regulators must also address AI’s growing market concentration to prevent dominant players from stifling competition. “It’s crucial to prevent companies like Microsoft and Amazon from monopolising AI,” Kolochenko says. “Take LinkedIn, a treasure trove of valuable human insights with detailed analytics on user engagement and content quality. Because LinkedIn is owned by Microsoft, the company has unique access to this data, which could then be used to train proprietary models. Since Microsoft also owns Azure, they can combine data and compute resources in ways smaller startups simply cannot match.”

A possible solution could involve making proprietary data sets more widely accessible. “To maintain fair competition, regulators must ensure that dominant players do not unfairly exploit their resources,” Kolochenko continues. “One solution could be making LinkedIn’s data available for licensing so that other companies can access the same training resources. The risk is that Microsoft’s unique combination of assets could stifle innovation by smaller players who don’t have access to equivalent data or infrastructure.”

The road ahead

As AI adoption accelerates, companies must navigate an increasingly complex regulatory environment. “AI compliance and governance will be increasingly challenging as regulatory landscapes become more complex,” Kolochenko says. “Take the US as an example: over 20 states have enacted AI-related laws, primarily focused on anti-discrimination, child protection, and deepfake regulations. This is on top of state laws around data protection and cybersecurity. The result is a fragmented regulatory environment that is difficult for businesses to navigate. Companies may reach a point of ‘regulation fatigue,’ where they comply only with core requirements and choose to ignore less critical rules.”

For many, compliance will become a matter of risk management rather than full adherence. “Many companies will prioritise basic compliance, focusing on common-sense security and anti-discrimination practices,” Kolochenko concludes. “However, they may opt to risk penalties for the more complex regulations rather than invest heavily in full compliance. For many, it’s a calculated decision: paying a fine is often cheaper than overhauling systems and procedures to meet each new requirement.”
