New ICO guidance on AI and data protection


The UK Information Commissioner’s Office has updated the Guidance on Artificial Intelligence and Data Protection.

The changes respond to requests from UK industry to clarify requirements for fairness in AI that will help organisations adopt new technologies while protecting people and vulnerable groups.  

The ICO's updated Guidance on AI and Data Protection reduces the compliance burden for organisations and reflects upcoming changes to AI regulation and data protection law.

The guidance was updated on 15 March 2023. The update supports the UK government's vision of a pro-innovation approach to AI regulation and, more specifically, its intention to embed considerations of fairness into AI.

Acknowledging the fast pace of technological development, the ICO anticipates that further updates will be required, so using the data protection principles as the core of this expanding work makes editorial and operational sense.

Outlined below is the new content, so past readers of the Guidance on AI and Data Protection can navigate the changes at speed.

What are the accountability and governance implications of AI?

What you need to know: New content on what to consider as part of your DPIA.

How do we ensure transparency in AI?

Change overview: This is a new chapter with new content. The ICO has created a standalone chapter with new high-level content on the transparency principle as it applies to AI. The main guidance on transparency and explainability resides within the ICO's existing Explaining Decisions Made with AI product.

How do we ensure lawfulness in AI?

Change overview: This is a new chapter with old content, moved from the previous chapter titled 'What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?', with two new sections added.

What you need to know: New content added on AI and inferences, affinity groups, and special category data.

What do we need to know about accuracy and statistical accuracy?

Change overview: This is a new chapter with old content.

What you need to know: Following the restructuring under the data protection principles, the statistical accuracy content, which used to reside within the chapter 'What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?', has moved into a new chapter focused on the accuracy principle. Statistical accuracy remains key for fairness, but the ICO felt it was more appropriate to host it under a chapter that focuses on the accuracy principle.

Fairness in AI

Change overview: This is a new chapter with new and old content.

What you need to know: The old content was extracted from the former chapter titled 'What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?'. The new content includes information on:

- Data protection's approach to fairness, how it applies to AI, and a non-exhaustive list of legal provisions to consider;
- The difference between fairness, algorithmic fairness, bias, and discrimination;
- High-level considerations when evaluating fairness and its inherent trade-offs;
- Processing personal data for bias mitigation;
- Technical approaches to mitigating algorithmic bias;
- How solely automated decision-making and relevant safeguards are linked to fairness, and key questions to ask when considering Article 22 of the UK GDPR.

Fairness in the AI lifecycle

Change overview: This is a new chapter with new content. This section covers data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It sets out why fundamental aspects of building AI, such as underlying assumptions, the abstractions used to model a problem, the selection of target variables, or the tendency to over-rely on quantifiable proxies, may have an impact on fairness. This chapter also explains the different sources of bias that can lead to unfairness, and possible mitigation measures.
