Navigating new risks in an AI-driven world


AI is revolutionising enterprise operations, but it also exposes organisations to unprecedented cyber threats, from data poisoning to deepfake-driven scams, as Mark Venables explains. With attackers leveraging AI’s capabilities to outpace defences, businesses must adopt proactive strategies to secure their systems and maintain resilience.

Artificial intelligence has quickly become a cornerstone of modern business, promising unprecedented efficiencies and insights. Yet, as organisations increasingly rely on AI, they must contend with the growing risks that this technology introduces. Cybercriminals are weaponising AI, exploiting its vulnerabilities to create sophisticated, targeted, and often undetectable attacks.

Bharat Mistry, Director of Product Management at Trend Micro, highlights how these risks reshape the cybersecurity landscape: “AI-based attacks are highly sophisticated and focus on manipulating the system itself rather than just disrupting services,” he begins. “These attacks do not merely aim to cause chaos; they erode the trust that organisations and users place in their systems, with potentially catastrophic consequences.”

How AI is exploited

Integrating AI into critical business processes has expanded the attack surface, offering malicious actors new ways to exploit systems. Mistry explains one particularly insidious method: “Data poisoning involves feeding an AI system with ‘poisoned’ data to create biases or inaccuracies in its decision-making,” he says. “For example, in a healthcare setting, attackers could manipulate data to make the AI misinterpret symptoms, unnecessarily alarming patients or delaying critical diagnoses. In sectors like finance, this manipulation could lead to flawed credit risk assessments or even fraudulent approvals.”
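To make the mechanism concrete, the sketch below shows label-flipping poisoning against a toy classifier. Everything here is synthetic and illustrative, assuming a simple scikit-learn model; it only demonstrates how a modest fraction of corrupted training labels can skew the decisions a model makes.

```python
# Minimal sketch of label-flipping data poisoning on a toy classifier.
# All data is synthetic; this illustrates the mechanism, not a real attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two well-separated classes.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker flips labels on a slice of one class,
# biasing the decision boundary the model learns.
y_poisoned = y.copy()
y_poisoned[:60] = 1  # 30% of class 0 deliberately mislabelled

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Held-out test data shows the degradation caused by the poisoning.
X_test = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```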

Another risk is how large language models (LLMs) handle sensitive information. “If an organisation integrates data from HR, finance, marketing, and engineering into a single model, it is easy for one department to prompt the AI to inadvertently reveal private information from another,” Mistry adds. “LLMs do not inherently understand access controls; they treat all data equally, which makes them incredibly vulnerable to unintentional breaches or deliberate exploitation.”
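One common mitigation is to enforce entitlements outside the model, filtering what reaches the prompt in the first place. The sketch below illustrates the idea; the document store, department labels, and helper functions are hypothetical, not any particular product’s API.

```python
# Minimal sketch of access control enforced BEFORE the prompt is built,
# since an LLM treats all text in its context window equally.
# Documents, labels, and functions here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    department: str  # e.g. "hr", "finance", "marketing"

DOCS = [
    Document("Q3 salary review bands ...", "hr"),
    Document("Q3 marketing campaign plan ...", "marketing"),
]

def retrieve_for_user(query: str, user_departments: set[str]) -> list[Document]:
    """Filter documents by the caller's entitlements before retrieval;
    the model never sees what it must not leak."""
    return [d for d in DOCS if d.department in user_departments]

def build_prompt(query: str, user_departments: set[str]) -> str:
    context = "\n".join(d.text for d in retrieve_for_user(query, user_departments))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A marketing user cannot pull HR data into the model's context.
print(build_prompt("What are the salary bands?", {"marketing"}))
```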

Emerging techniques, such as embedding malicious instructions within otherwise benign text, also highlight the sophistication of modern attackers. “These techniques use invisible Unicode characters to hide malicious instructions in what appears to be normal text,” Mistry continues. “This is akin to digital steganography. For example, an attacker could insert hidden commands in a seemingly innocent email, prompting the AI to extract confidential data without the user even realising it.”
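Defensively, such payloads can often be caught by screening input for characters a human reader would never see. The sketch below shows one illustrative check, covering the common zero-width characters and the Unicode ‘tag’ block (U+E0000 to U+E007F) that has been used to smuggle hidden instructions; the list is not exhaustive.

```python
# Minimal sketch of screening text for invisible Unicode characters of
# the kind used to hide instructions from a human reader. The character
# list below is illustrative, not exhaustive.
import unicodedata

SUSPICIOUS = {
    "\u200b", "\u200c", "\u200d",  # zero-width space / non-joiner / joiner
    "\u2060", "\ufeff",            # word joiner, byte-order mark
}

def hidden_characters(text: str) -> list[tuple[int, str]]:
    """Return (position, codepoint name) for characters a reader would
    not see, including the Unicode 'tag' block (U+E0000-U+E007F)."""
    hits = []
    for i, ch in enumerate(text):
        if ch in SUSPICIOUS or 0xE0000 <= ord(ch) <= 0xE007F:
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

sample = "Please summarise this email.\u200b\U000E0041\U000E0042"
print(hidden_characters(sample))
# Flags the zero-width space and the two hidden tag characters.
```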

AI-driven social engineering

Social engineering attacks have become far more dangerous with the advent of AI. Once easily identified by poor grammar and generic wording, phishing emails are now virtually indistinguishable from legitimate communication: AI lets attackers craft messages with native-sounding language and contextual accuracy that can fool even seasoned professionals. They no longer rely on volume to succeed. Instead, they use AI to analyse public information about their targets, tailoring each message to be as convincing as possible.

Deepfake technology is adding another layer of complexity. Mistry recounts a recent example: “A finance worker in Hong Kong attended what he believed was a legitimate Zoom meeting with senior executives. In reality, four participants were deepfake personas created by AI. The language, tone, and facial expressions were meticulously crafted to create a realistic interaction. These kinds of psychological nuances make AI-driven social engineering attacks particularly dangerous.”

Trend Micro’s research corroborates this trend, showing that attackers are shifting from broad campaigns to highly targeted operations. “What we are seeing is a move away from scattergun approaches to precision strikes,” Mistry adds. “They are not just targeting anyone; they are going after high-value individuals like executives, board members, or those with access to critical systems. These attacks are harder to detect and even harder to defend against.”

Using AI as a defence mechanism

While AI introduces new risks, it also provides unparalleled opportunities for strengthening cybersecurity. Mistry describes how Trend Micro leverages AI to counteract these emerging threats: “We use AI across our platform to detect and analyse data patterns in real time,” he explains. “By correlating alerts across email, network, and cloud environments, AI creates a comprehensive threat picture that allows us to respond faster and more effectively.”
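Stripped of any vendor specifics, the core idea is that an indicator seen across several telemetry sources is a stronger signal than any single alert. The sketch below illustrates that correlation step with invented data; it is not Trend Micro’s implementation.

```python
# Minimal sketch of cross-source alert correlation: group alerts from
# email, network, and cloud telemetry by a shared indicator so related
# events surface as one incident. Alert shape and data are invented.
from collections import defaultdict

alerts = [
    {"source": "email",   "indicator": "198.51.100.7", "detail": "phishing link clicked"},
    {"source": "network", "indicator": "198.51.100.7", "detail": "outbound beacon"},
    {"source": "cloud",   "indicator": "198.51.100.7", "detail": "anomalous API token use"},
    {"source": "network", "indicator": "203.0.113.9",  "detail": "port scan"},
]

incidents = defaultdict(list)
for a in alerts:
    incidents[a["indicator"]].append(a)

# An indicator seen across multiple sources outranks any single alert.
for indicator, related in incidents.items():
    sources = {a["source"] for a in related}
    if len(sources) > 1:
        print(f"correlated incident on {indicator}: "
              f"{len(related)} alerts from {sorted(sources)}")
```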

One of AI’s most transformative applications is its ability to assist human analysts. When an analyst encounters unfamiliar code, AI can analyse it and present its findings in clear, actionable terms. This is particularly valuable for junior analysts, as it bridges knowledge gaps and speeds up response times. It is not just about detection; it is about empowering teams to act decisively.

“AI also plays a pivotal role in post-incident investigations,” Mistry continues. “After an attack, AI can map the intruder’s likely paths and predict which assets might be targeted next. This allows organisations to set up defensive choke points and prevent similar breaches from occurring in the future. It’s about staying ahead of the attacker, not just cleaning up after them.”
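The underlying technique is essentially graph analysis: model hosts as nodes, reachable connections as edges, and enumerate routes from the compromised machine to high-value assets. A minimal sketch, with an invented topology, using the networkx library:

```python
# Minimal sketch of post-incident path mapping: enumerate routes from a
# compromised host to a crown-jewel asset so defenders know where to
# place choke points. The topology below is invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("workstation-17", "file-server"),
    ("workstation-17", "jump-host"),
    ("jump-host", "db-server"),
    ("file-server", "db-server"),
    ("db-server", "customer-data"),
])

for path in nx.all_simple_paths(g, "workstation-17", "customer-data"):
    print(" -> ".join(path))
# Every route crosses db-server, making it the natural choke point.
```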

Overcoming governance challenges

Despite its potential, deploying AI securely comes with its own challenges. “One of the biggest issues we are seeing is the lack of governance,” Mistry says. “People are using unsanctioned AI tools in their workflows, what we call ‘bring your own AI.’ This creates significant risks because organisations often have no visibility into how these tools handle data or whether they comply with regulations like GDPR.

“It is essential to establish clear policies for which AI tools are permitted, how they are used, and how data is managed. Regular audits and response protocols should also be in place to ensure compliance and security. Without these safeguards, organisations risk exposing themselves to significant vulnerabilities.”
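In practice, the first such control is often a simple allow-list enforced at an egress proxy, which is what gives the governance team visibility into ‘bring your own AI’. A minimal sketch, with invented domains and policy:

```python
# Minimal sketch of an AI-tool allow-list check at an egress proxy.
# The domains and policy are illustrative only, not a real product.
SANCTIONED_AI_DOMAINS = {"copilot.example.com", "internal-llm.example.com"}

def review_request(destination: str, user: str) -> str:
    """Allow sanctioned AI endpoints; block and log everything else
    so the governance team can audit unsanctioned tool use."""
    if destination in SANCTIONED_AI_DOMAINS:
        return "allow"
    print(f"audit: {user} attempted unsanctioned AI tool {destination}")
    return "block"

print(review_request("randomchatbot.example.net", "j.smith"))
```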

Looking ahead, Mistry anticipates the introduction of industry standards to regulate AI deployment. “We’re likely to see certification requirements similar to those for IoT devices,” he says. “These standards will help ensure that AI technologies meet minimum safety and security criteria, which will be crucial as AI becomes more integrated into regulated industries like healthcare and finance.”

Closing the skills gap

The growing complexity of AI-based threats has also highlighted a significant skills gap in the cybersecurity industry. Traditional expertise in network defence and endpoint protection is no longer enough. “We need professionals who understand the nuances of AI,” Mistry explains. “This includes skills in coding, software-defined environments, and adversarial attacks. The next generation of cybersecurity experts must be fluent in AI and traditional cyber defence.”

Agentic AI, where multiple AI agents collaborate autonomously, represents the next frontier. While promising, this technology introduces additional challenges. “Agentic AI has the potential to achieve accuracy levels of up to 95 or 96 per cent,” Mistry notes. “But it also demands a level of expertise that many organisations lack. Bridging this gap will require significant investment in training and upskilling.”

Communication is another critical factor. CISOs need to move beyond technical jargon and speak the language of the boardroom. Metrics like the number of blocked emails do not resonate with executives. Instead, CISOs need to frame cybersecurity as a business enabler, focusing on metrics for business continuity and risk reduction.

A proactive path forward

Organisations must adopt a proactive approach to cybersecurity to stay ahead of evolving threats. This begins with embedding security into every stage of AI development. “Security cannot be an afterthought,” says Mistry. “It needs to be integrated from the start, with measures like robust data governance, stress-testing models against adversarial attacks, and continuous monitoring.”

Fostering a culture of awareness is equally important. “Cybersecurity is not just the responsibility of the IT department; it is everyone’s job,” Mistry says. “Human error remains a leading cause of breaches, so targeted training and awareness programmes are critical. Employees need to understand the risks, from deepfakes to unsanctioned AI tools, and how to mitigate them.”

As the cyber threat landscape continues to evolve, Mistry remains optimistic about AI’s role in building resilience. “We’re only scratching the surface of what AI can do,” he concludes. “With the right strategies, it can serve as a powerful tool to pre-empt threats, streamline operations, and drive innovation. The key is to strike a balance, leveraging AI’s capabilities while mitigating risks.”
