BSI calls for a contestability tool to enhance trust in AI


Recent incidents have made the potential harm of flawed AI systems alarmingly clear. From erroneous customer support responses to misguided health advice, these missteps highlight the urgent need for accountability in artificial intelligence. This week, new research from the British Standards Institution (BSI) proposes a solution: a universal contestability tool, a mechanism that could allow users to flag issues within AI systems, fostering greater transparency and digital rights protection.

BSI’s findings emerge at a critical moment as AI permeates diverse sectors. Mark Thirlwell, Managing Director of AI Regulatory Services at BSI, explains the gravity of this evolution, stating, “AI holds transformative potential, but this must be grounded in user confidence that AI can be trusted to operate within ethical and legal guardrails.” Thirlwell argues that a standardised feedback tool, especially one embedded within AI systems, could address the lack of oversight currently plaguing AI technologies. According to the BSI report, a system for contesting AI outputs could mitigate the risks of AI-driven errors and offer a pathway for redress and improvement, ideally balancing accessibility, cost, and practicality.

Accountability in the AI supply chain

A primary challenge lies in the global, multi-tiered AI supply chain, where data is often controlled by various stakeholders, and individual companies may lack comprehensive visibility. As detailed in the BSI report, this fragmentation complicates the establishment of clear accountability channels. AI systems are typically deployed through partnerships spanning multiple countries, making it difficult for a single entity to be held responsible for any given malfunction. To counteract this, BSI advocates a shared responsibility model where all parties in the supply chain engage in continuous, transparent practices to address AI errors proactively.

A core theme of BSI’s research is the importance of transparency. Participants in BSI-led workshops identified the need for users to know their rights concerning AI’s use and potential risks. BSI’s contestability tool could act as a central interface, standardising how complaints are handled, defining the responsibilities of AI providers, and clarifying who is liable if something goes wrong.

Building trust through a standardised feedback system

BSI’s research reveals overwhelming support for a uniform feedback mechanism. According to surveys, 62 percent of participants across the UK, India, Germany, and China advocate for a standardised system that allows users to raise concerns about AI behaviour. This would include “bias bounties,” where users receive compensation for identifying AI biases or faults. Thirlwell sees this as a significant shift towards democratising AI oversight: “A contestability tool does more than just gather feedback; it establishes a dialogue between AI providers and users, promoting a new standard of digital rights.”

To make this tool effective, BSI recommends simplicity and accessibility. Contestation processes would use non-technical language, be adaptable to user needs, and operate with assurances of confidentiality to prevent retaliation against users who report issues. BSI also highlights that the feedback mechanism should be scalable, so it can keep pace with evolving AI models and remain responsive as issues arise.
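The BSI report does not specify an implementation, but the requirements above suggest what a minimal contestation record might look like in practice. The sketch below is purely illustrative: the `Contestation` fields, category names, and the toy routing rule are all assumptions, not anything drawn from BSI's proposal.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid


@dataclass
class Contestation:
    """Hypothetical record a user files against an AI output (illustrative only)."""
    system_id: str     # which AI system produced the contested output
    output_ref: str    # reference to the specific output being contested
    description: str   # the user's concern, in plain, non-technical language
    category: str = "other"   # e.g. "bias", "factual-error", "harm"
    anonymous: bool = True    # confidentiality, to prevent retaliation
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    filed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def route(c: Contestation) -> str:
    """Toy triage: decide which supply-chain party reviews the report.

    A real shared-responsibility model would be far richer; this only
    illustrates that different complaint types may land with different
    parties in a multi-tiered supply chain.
    """
    if c.category == "bias":
        return "model-provider"
    if c.category == "factual-error":
        return "deployer"
    return "joint-review"


report = Contestation(
    system_id="support-bot-v2",
    output_ref="resp-8841",
    description="The chatbot gave incorrect refund advice.",
    category="factual-error",
)
print(route(report))  # prints "deployer"
```

The point of keeping the record this small is the report's own emphasis: a user should be able to file it without technical knowledge, while the routing step makes accountability explicit rather than leaving a malfunction unowned across the supply chain.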

Navigating the complexities of global standards

BSI’s advocacy for a universal contestability tool aligns with a larger ambition: the creation of globally recognised AI standards that adapt across borders. Yet international AI governance faces steep obstacles, from differing regulatory standards to varying attitudes toward data privacy and ethics. For this reason, BSI suggests a soft law approach to encourage voluntary compliance while allowing local adaptations. This flexibility is vital as AI systems evolve rapidly, often outpacing the legislation intended to govern them.

By harmonising AI standards through initiatives like the contestability tool, BSI aims to foster a more cohesive global environment for AI deployment. The ultimate objective, according to the report, is to balance innovation with ethical responsibility, enabling AI to fulfil its potential as a positive force without sacrificing consumer trust or safety.

As AI becomes further embedded in daily life, BSI’s proposal underscores the urgent need to bridge the gap between technological advancement and user protection. A contestability tool, if widely adopted, could serve as a bulwark against the misuse of AI, ensuring that as this technology evolves, so does our capacity to hold it accountable. The promise of AI remains enormous, but realising it responsibly will require steadfast dedication to standards that uphold both innovation and trust.
