The growing presence of artificial intelligence on smartphones presents a security challenge few consumers or businesses are prepared for. According to a new announcement from mobile cybersecurity firm Appdome, agentic AI assistants such as ChatGPT, Siri, Gemini and Copilot, the same tools that help users navigate everyday tasks, are now capable of silently surveilling apps, exfiltrating data, and hijacking sessions in real time.
In response, Appdome has released dynamic defence plugins for Android and iOS designed to detect and block unauthorised AI assistants behaving like malware. These tools, the company claims, give mobile app developers and enterprises visibility into when and how AI assistants interact with apps, and allow them to enforce control before sensitive data is compromised.
While AI-enabled assistants promise productivity and convenience, the report argues that their growing autonomy is a double-edged sword. Whether assisting users or impersonating them, AI agents can access UI overlays, intercept transactions, and gather session data, often without triggering any traditional security alerts. On Android, these risks are heightened due to more permissive APIs. On iOS, threat vectors include screen mirroring exploits and unauthorised access to enterprise systems.
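Appdome has not published implementation details, but the Android exposure is easy to illustrate: any enabled accessibility service can read, and often drive, another app's UI. A minimal Kotlin sketch of one plausible check, enumerating enabled services and flagging any outside an allowlist (the trustedServices set and its package name are assumptions for illustration, not Appdome's mechanism):

```kotlin
import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Hypothetical allowlist; a real deployment would source this from policy.
// The package name below is shown purely as an example.
val trustedServices = setOf("com.google.android.googlequicksearchbox")

// Returns the package names of enabled accessibility services not on the
// allowlist. Agentic assistants that observe or drive another app's UI on
// Android typically do so through an accessibility service.
fun findUntrustedAccessibilityServices(context: Context): List<String> {
    val am = context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    return am.getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK)
        .map { it.resolveInfo.serviceInfo.packageName }
        .filter { it !in trustedServices }
}
```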
Good AI, bad intentions
The core issue is not simply bad actors building malicious AI tools. It is that even widely accepted, officially sanctioned assistants have capabilities that, in the wrong context, become liabilities. According to Appdome, the mobile environment cannot differentiate between legitimate and illegitimate actors; it only recognises what is allowed and what is not, a distinction that becomes murky when AI tools gain runtime access to data streams and user interfaces.
In enterprise environments, this opens the door to far-reaching consequences. A malicious AI assistant acting as an employee could leak documents, navigate internal systems, or trigger actions that compromise organisational security. In consumer apps, especially in banking, healthcare, or digital wallets, the risk includes data theft, impersonation, and the silent capture of credentials.
The threat landscape is widened further by unofficial or wrapped AI applications, which mimic popular tools like ChatGPT but are re-skinned and distributed by third parties. These clones often request excessive permissions, exfiltrate data to remote servers, and bypass standard app protections. Once embedded, they can observe everything from account activity to cryptographic tokens, enabling attackers to replay sessions or automate tampering using generative AI.
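The article does not say how such clones are identified, but one common heuristic is comparing an installed package's requested permissions against what a genuine lightweight chat client actually needs. A hedged Kotlin sketch of that idea (the suspiciousPermissions list is illustrative only; a real detector would use vetted per-app baselines, and on Android 11+ would also need package-visibility declarations):

```kotlin
import android.content.pm.PackageManager

// Illustrative permissions a simple AI "wrapper" app rarely needs legitimately.
val suspiciousPermissions = setOf(
    "android.permission.READ_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.SYSTEM_ALERT_WINDOW"
)

// Lists any suspicious permissions requested by the given package, or an
// empty list if the package is not installed on the device.
fun requestedSuspiciousPermissions(pm: PackageManager, pkg: String): List<String> =
    try {
        pm.getPackageInfo(pkg, PackageManager.GET_PERMISSIONS)
            .requestedPermissions.orEmpty()
            .filter { it in suspiciousPermissions }
    } catch (e: PackageManager.NameNotFoundException) {
        emptyList()
    }
```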
A new layer of mobile defence
Appdome’s new plugins are designed to recognise these behaviours using behavioural biometrics, detecting in real time when an AI agent begins interacting with an app. Developers can choose to monitor, restrict, or block specific actions based on dynamic policies. The solution also allows organisations to define a list of ‘trusted’ AI assistants, ensuring legitimate agents retain access while unverified or cloned apps are excluded.
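Appdome's plugin API is not public, but the monitor/restrict/block model it describes maps naturally onto a small policy engine keyed to a trusted-assistant allowlist. A purely illustrative Kotlin sketch, in which every name (Action, AssistantEvent, AssistantPolicy) is invented for illustration:

```kotlin
// Hypothetical policy model; not Appdome's actual API.
enum class Action { MONITOR, RESTRICT, BLOCK }

data class AssistantEvent(val packageName: String, val touchesSensitiveUi: Boolean)

class AssistantPolicy(
    private val trusted: Set<String>,           // allowlisted assistant packages
    private val defaultAction: Action = Action.BLOCK
) {
    // Trusted agents are merely monitored, restricted near sensitive fields,
    // and anything unverified falls through to the default (block).
    fun decide(event: AssistantEvent): Action = when {
        event.packageName in trusted && !event.touchesSensitiveUi -> Action.MONITOR
        event.packageName in trusted -> Action.RESTRICT
        else -> defaultAction
    }
}

fun main() {
    val policy = AssistantPolicy(trusted = setOf("com.example.assistant"))
    println(policy.decide(AssistantEvent("com.example.assistant", touchesSensitiveUi = false))) // MONITOR
    println(policy.decide(AssistantEvent("com.clone.assistant", touchesSensitiveUi = true)))    // BLOCK
}
```

Defaulting unknown agents to BLOCK mirrors the deny-by-default posture the announcement describes, where unverified or cloned apps are excluded unless explicitly trusted.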
The announcement reflects a growing consensus in cybersecurity that AI, while a powerful enabler, also introduces novel attack surfaces that traditional defences were not built to handle. As Appdome’s Chief Product Officer Chris Roeckl warns, a flood of agentic AI is inevitable, and not all of it will be benign.
What is clear is that the definition of malware is evolving. In a world where assistants think and act for users, malicious agents do not need to hack their way in—they simply need permission to help.