Artificial intelligence is moving into one of the insurance industry’s most persistent challenges: the detection of fraudulent claims. A new partnership between Aston University and Domestic & General aims to develop a system that can learn to spot fraud with a level of precision that existing approaches cannot deliver.
Fraudulent activity remains a costly problem. The Association of British Insurers recorded 72,600 dishonest claims in 2022, worth a combined £1.1 billion. While the majority of claims are genuine, the financial weight of fraud inflates costs across the sector and ultimately feeds through to premiums for customers. Domestic & General, which provides appliance insurance and repair services, believes artificial intelligence can shift the balance.
Learning to read complex signals
Current tools such as identity verification and fraud databases help, but they operate in silos. The partnership with Aston University’s Sir Peter Rigby Digital Futures Institute, supported by Innovate UK, will attempt to build a fully integrated model. The goal is to create an AI system able to examine the wide variety of information generated by insurance claims and assign a risk score for each case.
The research team plans to draw on recent advances in natural language processing to analyse both written and spoken communications, and on neural networks capable of extracting meaning from complex and incomplete data. By training models to recognise suspicious behaviours and patterns, the system could detect high-risk claims earlier and flag them for further investigation.
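The article does not describe the model itself, but the idea of combining signals from a claim into a single risk score can be sketched in miniature. The phrases, weights, and thresholds below are invented for illustration; a real system would learn them from data rather than hard-code them.

```python
# Toy risk scorer: combines a few hand-crafted signals from a claim into a
# single 0-1 score. All phrases and weights are invented for this example.

SUSPICIOUS_PHRASES = {"lost receipt", "cash settlement", "urgent payout"}

def risk_score(claim_text: str, claims_last_year: int, new_account: bool) -> float:
    """Return a 0-1 risk score from simple illustrative signals."""
    text = claim_text.lower()
    score = 0.0
    score += 0.3 * sum(p in text for p in SUSPICIOUS_PHRASES)  # flagged wording
    score += 0.1 * min(claims_last_year, 5)                    # repeat claimants
    score += 0.2 if new_account else 0.0                       # very new accounts
    return min(score, 1.0)

print(risk_score("Lost receipt, need a cash settlement", 3, True))   # high risk
print(risk_score("Washing machine stopped spinning", 0, False))      # low risk
```

In practice the hand-written rules above would be replaced by a trained classifier over features extracted from text, audio transcripts, and account history, but the interface (claim in, score out) is the same.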
The scope extends to duplication, one of the most common tactics in fraud. By automatically checking customer and appliance details across multiple accounts, the system could identify repeat claims that might otherwise pass unnoticed. Bringing these elements together in a single platform, rather than as separate checks, would mark a significant shift in how insurers approach detection.
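The duplicate-checking step described above amounts to normalising customer and appliance details and grouping claims that share them. A minimal sketch, with field names invented for the example:

```python
# Toy duplicate finder: groups claims by normalised appliance serial number
# and postcode, surfacing repeat claims filed under different accounts.
# The field names ("serial", "postcode", "id") are invented for this sketch.
from collections import defaultdict

def find_duplicates(claims):
    """Return groups of claim IDs that share an appliance serial and postcode."""
    groups = defaultdict(list)
    for claim in claims:
        key = (
            claim["serial"].strip().upper(),
            claim["postcode"].replace(" ", "").upper(),
        )
        groups[key].append(claim["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

claims = [
    {"id": "C1", "serial": "ab123", "postcode": "B4 7ET"},
    {"id": "C2", "serial": "AB123", "postcode": "b47et"},  # same appliance, new account
    {"id": "C3", "serial": "ZZ999", "postcode": "M1 1AA"},
]
print(find_duplicates(claims))  # C1 and C2 match after normalisation
```

The normalisation (case-folding, stripping spaces) is what lets superficially different records match; a production system would add fuzzier matching for names and addresses.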
The role of explainable AI
Accuracy alone is not the only hurdle. For insurers to trust an automated system, transparency is essential. The Aston team will look to develop models that show why a claim has been flagged, rather than producing opaque outputs. This emphasis on explainability reflects a broader challenge facing AI across industries: powerful models must also be interpretable if they are to win acceptance in decision-making.
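One common route to the kind of explainability described here is to make each feature's contribution to a score inspectable, so a reviewer can see which signals drove a flag. The sketch below uses a simple linear scorer with invented feature names and weights; richer models typically use attribution methods to recover a similar breakdown.

```python
# Toy explainable scorer: a linear model whose output decomposes exactly
# into per-feature contributions. Feature names and weights are invented.

WEIGHTS = {
    "claims_last_year": 0.15,
    "days_since_purchase": -0.002,  # very recent purchases raise suspicion less over time
    "mismatched_address": 0.4,
}

def explain(features: dict) -> dict:
    """Return the risk score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return {
        "score": round(sum(contributions.values()), 3),
        "contributions": contributions,
    }

result = explain({"claims_last_year": 2, "days_since_purchase": 10, "mismatched_address": 1})
print(result["score"])          # overall risk
print(result["contributions"])  # why: per-feature breakdown a reviewer can audit
```

Because the score is just the sum of the contributions, the explanation is faithful by construction; the trade-off is that linear models capture less than the neural networks the project plans to use, which is exactly the interpretability tension the article describes.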
Professor Abdul Sadka, director of the Digital Futures Institute at Aston University, described the project as an opportunity to push the boundaries of AI in a live commercial environment. His team brings expertise in multimodal analytics, drawing together different types of data, while Domestic & General contributes operational knowledge of how fraud presents itself in practice.
AI as a tool for resilience
The partnership is structured as a Knowledge Transfer Partnership, a UK programme that links businesses with academic expertise. While the immediate aim is to protect against fraudulent claims, the longer-term ambition is to reduce claims costs across the sector. If successful, the savings could feed through to lower premiums for customers.
The challenge is substantial. Data arrives in multiple formats, from call recordings to email chains, and the system must be efficient enough to integrate into everyday operations. Yet the project highlights how artificial intelligence is being applied to entrenched industry problems, extending beyond consumer applications into the underlying mechanics of risk and resilience.
Fraud will not disappear, but if machines can learn to detect its traces faster and more accurately, the balance of advantage may start to shift. For an industry built on trust, that would be a significant gain.