Data protection and AI: UK GDPR and the law on automated decision-making

April 28, 2025

Artificial Intelligence (AI) systems are increasingly used to process personal data and make decisions about individuals. In the United Kingdom, data protection law – notably the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA 2018) – regulates how AI may be deployed, especially when automated processing is used to make decisions affecting individuals. This essay examines how UK data protection laws govern AI-driven processing of personal data and automated decision-making. It focuses on the UK GDPR rules (such as the restrictions on solely automated decisions and profiling) and the role of the DPA 2018, analysing the challenges of applying traditional privacy principles to modern AI technologies. The discussion covers the current legal framework and proposed reforms, such as those in the Data Protection and Digital Information Bill, with a view to evaluating whether the law effectively addresses the risks of AI-based decision-making while enabling innovation.

Background

Defining AI and automated decision-making: Artificial Intelligence (AI) generally refers to computer systems capable of performing tasks that typically require human intelligence, such as learning and decision-making. In the context of data protection, a particularly relevant subset of AI is automated decision-making – where decisions are made about individuals by technological means with little or no human involvement. The UK GDPR defines profiling as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements” (UK GDPR, Article 4(4)). In simpler terms, profiling is using algorithms to analyse personal information in order to classify or predict traits of an individual. Automated decision-making and profiling often go hand-in-hand in AI systems – for example, an algorithm might profile individuals by credit risk and then automatically decide whether to grant a loan.

UK data protection law context: The UK’s core data protection framework is set out in the UK GDPR and the DPA 2018. The UK GDPR is the retained EU General Data Protection Regulation (EU Regulation 2016/679) in domestic law, which continues to apply in the UK after Brexit with some technical adjustments. The DPA 2018 supplements the UK GDPR by filling in specific details and providing rules for areas not covered by the GDPR (such as law enforcement data processing and intelligence services). Together, these laws aim to protect the rights and freedoms of individuals (data subjects) with regard to personal data processing, while allowing for lawful and fair use of data.

Relevance to AI: AI systems frequently rely on vast amounts of personal data – for instance, an AI might ingest personal profiles, transaction histories or images of individuals to “learn” patterns. Decisions made by AI can have significant impacts on individuals: for example, an AI might autonomously decide whether someone is eligible for a job interview, what insurance premium they should pay, or whether their exam results should be adjusted. Data protection law becomes crucial in these scenarios. It provides a legal framework to ensure such processing is lawful, transparent, fair, and subject to accountability. The use of AI does not exist in a legal vacuum; when personal data are involved, the same privacy principles apply, although implementing them can be challenging. Indeed, regulators and courts have already dealt with cases of AI or algorithmic tools that potentially infringed data protection or related rights. A notable example is Bridges v South Wales Police (2020), where the Court of Appeal examined police use of live facial recognition technology. While facial recognition matching is not in itself a solely automated decision (since human officers reviewed any matches before deciding whether to intervene), the case underscored the importance of data protection compliance (e.g. conducting a Data Protection Impact Assessment and ensuring a lawful basis) when deploying AI-driven surveillance (R (Bridges) v South Wales Police [2020] EWCA Civ 1058). This and other incidents, such as the 2020 UK “exam results algorithm” controversy, highlight the public concern that AI systems may produce unfair or opaque outcomes. In the exam results situation, an algorithmic model was used to moderate students’ grades, leading to widespread complaints of injustice. Questions were raised as to whether this constituted unlawful solely automated decision-making under data protection law, although the authorities argued there was human input in setting and reviewing grades. These contexts form the backdrop against which the UK GDPR’s specific provisions on automated decision-making and profiling must be understood.

Legal Framework

UK GDPR: Principles and Rights Relevant to AI

The UK GDPR is built on fundamental data protection principles that apply to all processing of personal data, including processing by AI systems. Several of these principles are particularly significant for AI:

  • Lawfulness, fairness and transparency (Article 5(1)(a) UK GDPR): Personal data must be processed lawfully, fairly, and in a transparent manner. For AI, this means organisations must have a valid legal basis for processing (e.g. consent, contract, legitimate interests, etc.), must not process data in ways that unjustly harm individuals, and must be clear about how they use personal data. Fairness implies that outcomes of automated processing should not be discriminatory or unjust. Transparency is challenging for AI, because it requires informing individuals about how their data is used and, when automated decisions are made, providing meaningful information about the logic involved. Under Articles 13 and 14 UK GDPR (the right to be informed), controllers must inform data subjects if they will be subject to automated decision-making including profiling, and provide information about the processing, significance and envisaged consequences. These transparency obligations directly tackle the “black box” nature of many AI algorithms by requiring some explanation to those affected.
  • Purpose limitation (Article 5(1)(b)): Personal data collected for one purpose should not be reused for incompatible purposes. AI systems often repurpose data (for example, using social media data to infer creditworthiness). Controllers must ensure that any new use is compatible with the original purpose or else seek fresh consent or another lawful basis. This principle can constrain AI developers from freely exploiting personal data in ways the individuals did not expect.
  • Data minimisation (Article 5(1)(c)): Only the data that is necessary for a given purpose should be processed. AI development sometimes thrives on large, diverse datasets – but controllers should not collect or retain more personal data than needed for the specific AI decision-making purpose. For instance, an AI used to decide insurance eligibility should avoid using irrelevant personal data. This principle pushes against the “bigger is better” mindset in AI data collection.
  • Accuracy (Article 5(1)(d)): Personal data must be accurate and kept up to date. Decisions based on incorrect data can be flawed and harmful. In AI profiling, if the underlying data or the algorithm’s inferences are inaccurate, individuals have rights to rectification (Article 16 UK GDPR) of their personal data. Ensuring accuracy is challenging when AI makes probabilistic inferences – for example, predicting traits that may not be verifiable. Nonetheless, if an AI system profiles someone incorrectly (say, falsely flags an individual as a fraud risk), the controller is responsible for correcting or deleting that data to prevent unfair decisions.
  • Storage limitation (Article 5(1)(e)): Personal data should not be kept for longer than necessary. AI systems that continuously learn from data may tempt organisations to hoard data indefinitely, but they must still justify retention periods or anonymise data when feasible.
  • Integrity and confidentiality (security) (Article 5(1)(f)): AI systems processing personal data must implement appropriate security to prevent breaches. This is crucial given that AI datasets can be large and attractive targets, and any data breach could have serious consequences if it involves sensitive inferences about individuals.

Beyond these principles, the UK GDPR provides data subject rights that empower individuals in the context of automated processing. The most specific provision is Article 22 UK GDPR, which deals explicitly with automated decision-making and profiling that has legal or similarly significant effects on individuals. Article 22(1) establishes that as a rule, individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them (UK GDPR, Article 22(1)). In other words, purely automated decisions that have a significant impact on a person are generally prohibited unless additional conditions are met. A “decision producing legal effects or similarly significant effects” would include, for example, an algorithmic decision to refuse someone a job, grant or deny a loan, or determine educational grades – since these have substantial impact on one’s rights, opportunities or status.

Article 22(2) then sets out exhaustive exceptions where such automated decisions are permitted. The three exceptions are where the decision is: (a) necessary for entering into or performing a contract with the data subject; (b) authorised by law (to which the controller is subject) that also lays down suitable safeguards for the data subject’s rights and interests; or (c) based on the data subject’s explicit consent (UK GDPR, Article 22(2)). If an automated decision falls under one of these categories, it is lawful despite the general prohibition in Article 22(1) – but importantly, even in these cases, the GDPR mandates protections. Article 22(3) requires that, for decisions under the contract or consent exceptions, the data controller must implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” This means if, for example, a bank relies on an automated credit scoring system to refuse a loan (arguably necessary for a contract), the individual should be informed and given an opportunity to have a human review the decision or to contest it. The GDPR ensures that humans are not completely sidelined when algorithms make important decisions about individuals.
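
To make the structure of Article 22 concrete, the minimal Python sketch below models the screening questions a controller might ask before relying on automated decision-making. The class, field names and wording of the outputs are illustrative assumptions for this essay, not an official or ICO-endorsed compliance test.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedDecision:
    solely_automated: bool                    # no meaningful human involvement
    significant_effect: bool                  # legal or similarly significant effect
    exception: Optional[str]                  # "contract", "law", "consent", or None
    uses_special_category_data: bool = False

def article_22_screen(d: ProposedDecision) -> list:
    """Return the actions/safeguards a controller would need to consider."""
    if not (d.solely_automated and d.significant_effect):
        return ["Outside Article 22(1); the general UK GDPR principles still apply."]
    if d.exception not in ("contract", "law", "consent"):
        return ["Not permitted: no Article 22(2) exception identified."]
    actions = ["Tell the data subject about the automated decision-making (Articles 13-15)."]
    if d.exception in ("contract", "consent"):
        actions += [
            "Provide a route to human intervention (Article 22(3)).",
            "Let the data subject express their view and contest the decision.",
        ]
    else:  # authorised by law
        actions.append("Apply the DPA 2018 section 14 safeguards (discussed below).")
    if d.uses_special_category_data:
        actions.append("Check Article 22(4): explicit consent or substantial public interest required.")
    return actions

# Example: an online lender refusing credit by algorithm under the contract exception.
for step in article_22_screen(ProposedDecision(True, True, "contract")):
    print(step)
```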

In the case of the law-authorised decisions exception (Art 22(2)(b)), the UK has chosen to lay down specific safeguards through domestic law. The Data Protection Act 2018 provides that safeguard. Section 14 of the DPA 2018 deals with “Automated decision-making authorised by law: safeguards.” Under DPA 2018 s.14, when a controller makes a qualifying significant decision about a data subject that is based solely on automated processing and that decision is required or authorised by law, the controller must: (i) notify the data subject that the decision has been made using solely automated processing as soon as reasonably practicable, and (ii) give the data subject an opportunity within a specified period to request a reconsideration or a new decision involving human intervention. If the data subject makes such a request, the controller must have a human review the decision (or take a new decision not based solely on automation) and inform the data subject of the outcome. These provisions effectively mirror the safeguard of human intervention mentioned in Article 22(3), but they embed it into UK law for cases where an Act of Parliament or other law allows automated decisions. In essence, whether an automated decision is justified by contract, consent or law, UK data protection law seeks to ensure individuals can challenge the result and have a person examine their case.
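
By way of illustration, the sketch below models the notification and reconsideration workflow that section 14 describes as a controller might track it internally. It is a simplified assumption rather than a statement of the statutory mechanics: a one-month window stands in for the Act’s “specified period”, and the class and method names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    subject_id: str
    notified_on: Optional[date] = None
    reconsideration_requested: bool = False
    review_outcome: Optional[str] = None
    request_window: timedelta = timedelta(days=30)   # assumed "specified period"

    def notify_subject(self, today: date) -> None:
        # Step (i): tell the data subject, as soon as reasonably practicable,
        # that the decision was taken by solely automated processing.
        self.notified_on = today

    def request_reconsideration(self, today: date) -> bool:
        # Step (ii): accept a request for reconsideration or a new decision
        # only if it arrives within the assumed window after notification.
        if self.notified_on is None or today > self.notified_on + self.request_window:
            return False
        self.reconsideration_requested = True
        return True

    def record_human_review(self, outcome: str) -> None:
        # A human reviews (or retakes) the decision; the data subject
        # must then be informed of the outcome.
        if self.reconsideration_requested:
            self.review_outcome = outcome

record = AutomatedDecisionRecord("subject-001")
record.notify_subject(date(2025, 4, 1))
if record.request_reconsideration(date(2025, 4, 20)):
    record.record_human_review("original decision overturned on human review")
print(record.review_outcome)
```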

An additional layer in Article 22 is paragraph 4, which addresses special category data (sensitive personal data, such as data revealing racial or ethnic origin, political opinions, health, etc.). It prohibits making automated decisions under the exceptions if those decisions are based on special category data, unless the individual has given explicit consent or the processing is necessary for substantial public interest under law, and even then, appropriate safeguards must be in place (UK GDPR, Article 22(4)). This is especially relevant for AI systems, because if an AI were to use sensitive attributes (explicitly or implicitly) in automated decisions – for instance, an algorithm that takes account of health data or infers race – it faces stricter conditions. Controllers generally must avoid using sensitive data in automated decisions unless they have a clear legal justification and protections to prevent discrimination or other harms.

Beyond Article 22, several other provisions of the UK GDPR are pertinent to AI and profiling:

  • Right to information and access: Individuals have the right to be informed if personal data is used to make automated decisions about them (Articles 13 and 14). They also have a right to access their data and obtain information about automated processing, including “meaningful information about the logic involved” (Article 15). This means, for example, a person can ask a company to explain in general terms how an AI algorithm profiles them or makes decisions – a challenging requirement for complex AI, but part of the law’s transparency objective.
  • Right to rectification (Article 16) and erasure (Article 17): If an AI system’s output is based on incorrect personal data, the individual can demand correction. They may also, in some cases, request deletion of personal data – though erasing data that has been used to train AI models can be complicated in practice (the law provides some exemptions, for instance, if data must be retained for legal reasons or if it has been effectively anonymised).
  • Right to object (Article 21): Individuals have the right to object to certain processing, including profiling. In particular, Article 21(2) gives an unconditional right to object to processing of personal data for direct marketing, including any related profiling. For other types of profiling or automated processing under legitimate interests, an individual can object, and the controller must cease processing unless it demonstrates compelling legitimate grounds. This right empowers people to say “I don’t want to be profiled by your AI” in some contexts (notably marketing).
  • Data Protection Impact Assessments (DPIAs) (Article 35): Controllers must perform a DPIA before carrying out any processing that is likely to result in a high risk to individuals’ rights. The UK GDPR and regulatory guidance explicitly recognise that profiling and automated decision-making can be high-risk activities – especially if they involve systematic and extensive evaluation of personal aspects or processing sensitive data. In practice, deploying a new AI decision-making system (for instance, a tool to automatically assess job applicants) typically triggers the requirement to conduct a DPIA. The DPIA process forces the controller to assess the necessity and proportionality of the processing, and to identify and mitigate risks to privacy and other rights. The importance of DPIAs was underscored in the Bridges case mentioned earlier – the Court of Appeal found that the police had not adequately complied with the DPIA obligations under DPA 2018 in trialling facial recognition, contributing to the finding that the use of that AI technology was unlawful (R (Bridges) v South Wales Police [2020] EWCA Civ 1058). A simple screening sketch follows this list.
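
As a rough illustration of how an organisation might operationalise the DPIA screening step mentioned in the final bullet above, the sketch below flags projects that match common high-risk triggers of the kind discussed in this section. The trigger list, wording and function name are assumptions for demonstration, not the ICO’s screening criteria.

```python
# Hypothetical high-risk triggers loosely based on the factors discussed above.
HIGH_RISK_TRIGGERS = {
    "systematic and extensive profiling with significant effects",
    "large-scale processing of special category data",
    "systematic monitoring of a publicly accessible area",
    "use of innovative technology such as AI decision-making",
}

def dpia_likely_required(project_features: set) -> bool:
    """Flag a project for a DPIA if it matches any assumed high-risk trigger."""
    return bool(project_features & HIGH_RISK_TRIGGERS)

# Example: an AI tool that automatically assesses job applicants.
features = {
    "systematic and extensive profiling with significant effects",
    "use of innovative technology such as AI decision-making",
}
print(dpia_likely_required(features))   # True -> conduct a DPIA before deployment
```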

In summary, the UK GDPR currently provides a comprehensive framework that in principle covers AI and automated decision-making: it sets general standards (principles of fairness, etc.), specific prohibitions and conditions for automated decisions (Article 22), and gives individuals rights to know about, object to, and challenge algorithmic decisions. The Information Commissioner’s Office (ICO), as the UK’s data protection regulator, is empowered under the DPA 2018 to enforce these rules – including investigating AI-based processing and issuing fines or orders for non-compliance.

Data Protection Act 2018: Role and Provisions

The DPA 2018 plays a supporting role to the UK GDPR in the context of AI. As noted, Section 14 DPA 2018 provides safeguards for automated decisions authorised by law, ensuring the individual’s right to human review. The DPA 2018 also contains various exemptions and clarifications that can affect AI processing. For example, there are certain exemptions in Schedule 2 of the DPA 2018 (such as for research, or for management forecasts) that, if applicable, might affect how data subject rights operate in an AI context. Generally, however, those exemptions are narrow and would not permit unrestricted automated decision-making.

Importantly, the DPA 2018 has distinct parts for specific sectors: Part 3 covers data processing by law enforcement authorities, and Part 4 covers processing by intelligence services. These parts include their own provisions on automated decision-making. For instance, in the law enforcement context, Section 49 DPA 2018 provides that a law enforcement controller (like the police) “may not take a significant decision based solely on automated processing” unless it is authorised by law, and even then Section 50 requires safeguards similar to those in the GDPR (e.g. the right to have the decision reconsidered with human involvement). Likewise, for the intelligence services, Section 96 DPA 2018 gives individuals the right not to be subject to solely automated decisions that significantly affect them. The parity of these rules across general, law enforcement, and intelligence contexts shows a clear legislative intent: across the board, important decisions about people should not be left entirely to machines without oversight or accountability.

Another aspect of the DPA 2018 is that it empowers the ICO to create codes of practice and guidance. While not binding law, such guidance (for example, the ICO’s guidance on AI and Data Protection, and on Explaining AI decisions) is influential in interpreting how controllers can practically comply. For example, the ICO (in collaboration with the Alan Turing Institute) has issued detailed guidance on AI, addressing issues like transparency and fairness in AI systems. Organisations deploying AI are expected to heed this guidance to meet their accountability obligations under Article 5(2) UK GDPR and Section 34 DPA 2018 (which requires appropriate documentation and policies).

Summary of current framework: Under current UK law, any organisation using AI to process personal data and make decisions must ensure compliance with the UK GDPR’s principles, identify a valid legal basis under Article 6 (and Article 9 if special data are involved), and determine if Article 22’s restriction on solely automated significant decisions applies. If it does, they must either avoid purely automated decisions or fit within an exception and implement the required safeguards (like human review). They must also be prepared to uphold individuals’ rights (providing information, access, etc.) and likely need to carry out DPIAs to assess risks beforehand. Non-compliance can lead to regulatory enforcement, including potentially large fines or orders to stop processing. This framework is meant to protect individuals from the potential harms of unchecked AI decision-making – such as secret profiling, unjustified algorithms, or irrevocable automated judgments – by embedding a degree of human oversight and accountability.

Application to AI in Practice

Translating the legal framework into practice is not straightforward. AI systems and automated decision-making are deployed in diverse sectors, and real-world scenarios illustrate both the effectiveness and the limits of data protection law in governing AI.

Automated credit scoring and finance: Consider a bank that uses an AI algorithm to decide whether to grant loans or credit cards. This is a prototypical case of automated decision-making with significant effects (approval or denial of credit). Under UK GDPR Article 22, the applicant has a right not to be subject to a purely automated decision in such cases unless an exception applies. In practice, banks often argue that such processing is “necessary for entering into a contract” with the applicant (an Article 22(2)(a) exception) – essentially claiming they couldn’t practically offer online credit decisions without automation. If that exception is relied upon, the bank must provide safeguards. Typically, lenders inform applicants that an automated score or profiling was used and offer the chance to request human review of any rejection. Indeed, many financial services include a mechanism to appeal or query a negative automated outcome, as required by the law. The UK also has sectoral rules (e.g. credit reference agencies and the Financial Conduct Authority’s guidelines) that complement data protection duties, reinforcing fairness and transparency in algorithms used for creditworthiness. Nonetheless, challenges remain: AI credit models might use complex machine learning that even the bank’s staff cannot easily interpret, making it hard to provide a “meaningful explanation” to a rejected applicant of why the algorithm said no. This raises the question of how effective the Article 22 safeguard of human intervention is, if the human reviewers themselves are reliant on an inscrutable model’s output. It also tests the fairness principle – for instance, if an AI inadvertently uses proxies for protected characteristics (like postcode as a proxy for ethnicity) leading to biased credit outcomes, it may breach the fairness and non-discrimination ethos of data protection and equality law. Regulators can and have intervened; there have been enforcement actions in Europe under GDPR against financial institutions whose automated profiling was found discriminatory or non-transparent, illustrating that the legal standards are not merely theoretical.
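
The sketch below illustrates, in deliberately simplified form, how a lender might pair an automated score with plain-language reason codes and route refusals for human review, in the spirit of the Article 22(3) safeguards discussed above. The features, weights and threshold are invented for illustration and bear no relation to any real credit model.

```python
# Invented weights and threshold for a toy linear scoring model (inputs 0-1).
WEIGHTS = {"payment_history": 0.5, "income_to_debt": 0.3, "account_age_years": 0.2}
THRESHOLD = 0.6

def decide(applicant: dict) -> dict:
    """Score an applicant and, on refusal, produce simplified reason codes
    and route the case for human review."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # The weakest-contributing factors become the plain-language "reasons".
    reasons = sorted(contributions, key=contributions.get)[:2]
    return {
        "approved": approved,
        "score": round(score, 2),
        "reason_codes": [] if approved else reasons,
        "route_to_human_review": not approved,   # refusals go to a reviewer
    }

print(decide({"payment_history": 0.4, "income_to_debt": 0.7, "account_age_years": 0.5}))
```

Even a toy example like this shows the tension noted above: the reason codes are only as meaningful as the model is interpretable, which is far harder with complex machine learning than with a simple weighted score.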

Employment and recruitment: Employers are increasingly using AI tools to assist in hiring decisions – such as automated CV screening, aptitude test scoring, or even video interview analysis powered by algorithms. If an employer were to rely solely on an AI system to reject candidates without human involvement, this would likely trigger Article 22. Most UK employers avoid this by keeping a “human in the loop” – e.g. using AI to shortlist candidates but having a human make the final hiring decision. By doing so, they position the decision as not “solely” automated. However, this approach can be problematic if the human oversight is perfunctory. Data protection authorities (and commentators) have stressed that meaningful human involvement is required to take a decision outside Article 22’s scope – a human reviewer must genuinely assess and, if appropriate, override the AI’s recommendation, rather than just rubber-stamping it. If that doesn’t happen, the decision might effectively be automated despite nominal human presence. In the UK, as of now, there isn’t a reported court case of an applicant directly challenging an employer for an automated rejection, but one could envisage claims either under data protection law or equality law if an AI hiring tool systematically biases against certain groups. The Equality Act 2010 would also come into play if the AI’s decisions are discriminatory (even indirectly), while the UK GDPR would support a claimant’s access to data and logic to help uncover such bias.

Public sector and administrative decisions: Government agencies have experimented with AI for things like welfare benefit assessments, visa application processing, and fraud detection. Data protection laws fully apply to these uses. For example, the UK Home Office’s cancelled “visa streaming” algorithm (used to triage visa applications for risk) prompted concerns that it profiled applicants by nationality in a discriminatory manner. Although it was withdrawn under public pressure, such a system would have raised Article 22 issues if it had an autonomous role in deciding outcomes of applications. The Home Office maintained that the tool was just assisting officers, not making final decisions, highlighting again the human-in-the-loop workaround. In welfare and taxation contexts, other countries have seen litigation: a Dutch court in 2020 struck down an automated fraud detection system (SyRI) as violating human rights and data protection principles due to its intrusive profiling of welfare recipients without sufficient transparency or fairness. While that was under human rights law primarily, it resonates with GDPR principles. In the UK, the DPA 2018 Part 3 (law enforcement) has been used to question automated policing tools. The Bridges case dealing with facial recognition by police, though not an Article 22 scenario, demonstrated practical application of data protection requirements: the police were expected to conduct a rigorous DPIA and ensure their use of AI was strictly necessary and proportionate under law. The failure to properly account for things like the risk of algorithmic bias (facial recognition systems are known to perform unevenly across different ethnicities) was one factor that rendered the deployment unlawful. This indicates that before deploying AI, even public bodies must proactively address risks like accuracy and bias to comply with the law’s fairness and accountability requirements.

Another high-profile incident was the Ofqual exam grading algorithm in 2020. When exams were cancelled due to COVID-19, Ofqual (the regulator for exams in England) used an algorithm to standardise teacher-assessed grades. Many students saw their grades downgraded based on factors like their school’s past performance. This led to public outcry and was eventually reversed. From a data protection perspective, the situation raised the question: was this a decision “based solely on automated processing” that significantly affected students? Ofqual contended it was not solely automated because teachers provided initial grades and there was an appeals process (human elements). However, observers pointed out that the crucial standardisation step was automated and had a significant effect on individual results – which arguably brought it within Article 22’s remit. The ICO noted that GDPR “places strict restrictions” on solely automated decisions with significant effects and that processing must still be fair in any case (even if not solely automated). Ultimately, no litigation under GDPR occurred because the policy was abandoned, but the episode served as a lesson. It underscored that using an algorithm for high-stakes decisions must be done with extreme caution and transparency. Any future similar system would likely need to be designed with a clear legal basis, offer students an easy way to have a human review (a safeguard), and demonstrate that it does not unlawfully discriminate or arbitrarily disadvantage certain groups – otherwise it could be challenged as breaching data protection and equality laws.

Healthcare AI decisions: In healthcare, AI is used for diagnostic tools, risk scoring (e.g. predicting hospital readmissions), and resource allocation. If an AI system were to automatically decide, say, patient priority for treatment without clinician input, that could engage Article 22 (a decision with significant effect on a person’s health outcome). In practice, medical AI tends to be assistive (providing recommendations to doctors rather than final decisions without human confirmation). This is partly due to professional standards, but also legally it avoids the solely automated decision issue. Still, patients have rights to know if AI is influencing their care. The GDPR’s transparency and access rights would require, for example, that a patient can be informed that an AI system was used in their diagnosis and, if they ask, get information on what data was analysed. Research suggests Article 22 could even give patients a right to refuse purely AI-made decisions in favour of a human-reviewed decision in some scenarios (analogous to a second opinion). The tension here is between efficiency (AI might process faster or more objectively) and individual rights (the desire for human judgment and accountability in life-affecting matters).

Enforcement and compliance examples: Regulators in Europe have not shied away from enforcing GDPR in AI contexts. For instance, the Italian Data Protection Authority (Garante) in 2021 found that a food delivery company’s algorithmic rider management system violated GDPR provisions. The platform, which assigned jobs and evaluated gig workers through automated means (“profiling” their performance and reliability), was deemed to have breached principles of transparency and fairness and had inconsistencies that could lead to unjust treatment of workers. The company was ordered to change its practices and received a fine. While this was under EU GDPR, the UK (having mirrored rules) would likely view similar cases the same way. The case illustrates how data protection law can serve as a tool to address algorithmic management and labour issues, ensuring that workers are not subject to unaccountable automation. In the UK, the ICO has likewise scrutinised the use of facial recognition cameras in shopping centres to profile visitors without proper consent or transparency, warning that such processing risks being unfair and unlawful. Although not a decision-making system per se, this shows the ICO’s willingness to intervene in novel AI uses of personal data.

Overall, these practical contexts reveal a pattern: organisations often try to balance efficiency gains from AI with compliance by introducing some human oversight or by being cautious in deployment. The effectiveness of data protection law in practice depends on how seriously organisations take their obligations and how enforceable the rights are. Individuals may not always be aware an algorithm made a decision about them, which can limit the exercise of rights. This is why the law’s emphasis on proactive transparency and assessments (like DPIA) is crucial – to surface issues before harm occurs. Nevertheless, even with compliance efforts, AI technologies present significant challenges to the existing data protection framework, as discussed next.

Challenges in Applying Privacy Principles to AI

Applying traditional privacy and data protection principles to AI-driven technologies is challenging for several reasons:

1. Opacity and Explainability: Many AI models, especially those based on machine learning and neural networks, operate as “black boxes” where the decision logic is not readily interpretable by humans. Yet, data protection law requires a degree of transparency and explanation. Organisations struggle to provide meaningful information about the logic of processing (UK GDPR Articles 13–15) when even the developers cannot fully explain an AI’s complex decision rules. This leads to uncertainty about compliance: what level of explanation satisfies the law? There is ongoing debate about the so-called “right to explanation.” While the GDPR does not explicitly guarantee a detailed explanation for each decision, it does mandate that general information about the algorithmic logic and the significance of processing be given. In practice, providing simplified explanations or using proxy reasoning (e.g., “you were denied a loan because your credit score was below the threshold, which takes into account payment history and income”) may have to suffice. Researchers and the ICO have encouraged techniques for explainable AI to bridge this gap, but it remains a technical and legal challenge.

2. Human Oversight and the “Solely Automated” Threshold: As discussed, one way organisations try to avoid Article 22’s restriction is by ensuring a human is involved. However, distinguishing a truly automated decision from a partly human one is not always clear-cut. If a human simply follows an AI’s recommendation 99% of the time, is that really providing protection to the individual? Data protection authorities have warned that meaningful human involvement is needed – the human must have authority and competence to change the outcome, and must actually consider the case. This is easier said than done, especially in high-volume decision processes where the whole point of deploying AI is to eliminate human labour. Ensuring that humans do not become complacent or that they have the expertise to question an AI is both a practical and cultural challenge within organisations. Moreover, current law does not explicitly regulate partially automated decisions – Article 22 applies only to decisions “based solely” on automated processing. This means if there is any nominal human involvement, the strict rule may not apply. Some critics see this as a loophole that needs closing, since many AI deployments operate in a hybrid manner. The proposal to broaden the definition of automated decision-making to cover partly automated processes (advocated by the ICO and others) reflects this concern.
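
One practical way organisations try to evidence meaningful involvement is by monitoring how often reviewers actually depart from the AI’s recommendation. The sketch below computes such an override rate from a hypothetical decision log; a low rate does not by itself prove rubber-stamping, but it is a reasonable prompt for further scrutiny. The log format and field names are assumptions for illustration.

```python
def override_rate(decision_log: list) -> float:
    """Share of cases where the final human decision differed from the AI
    recommendation. Log entries are assumed to look like:
    {"ai_recommendation": "reject", "final_decision": "accept"}."""
    if not decision_log:
        return 0.0
    overrides = sum(
        1 for entry in decision_log
        if entry["final_decision"] != entry["ai_recommendation"]
    )
    return overrides / len(decision_log)

log = [
    {"ai_recommendation": "reject", "final_decision": "reject"},
    {"ai_recommendation": "reject", "final_decision": "accept"},
    {"ai_recommendation": "accept", "final_decision": "accept"},
]
print(f"Reviewers departed from the AI recommendation in {override_rate(log):.0%} of cases")
```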

3. Bias, Discrimination and Fairness: Ensuring AI decisions are fair and non-discriminatory is a major challenge. AI systems can inadvertently perpetuate or amplify societal biases present in training data. While the UK GDPR embodies fairness as a principle and Recital 71 warns against discriminatory effects of automated processing, the law does not provide a detailed algorithmic bias test. It relies on controllers to assess and prevent discrimination, and individuals can invoke data protection rights or Equality Act rights if they suspect bias. Identifying bias requires technical audits and access to data, which individuals usually lack. In the Bridges facial recognition scenario, one concern raised was the higher false-match rate for people of certain ethnic minorities, which could lead to unfair targeting – a fairness and equality issue tied to data processing. The Court noted that the police had not fully investigated demographic bias in the technology, hinting at the expectation that such issues should be part of a data protection impact assessment (since bias affects fairness and could make processing unlawful if it leads to unjustified discrimination). Going forward, integrating anti-discrimination checks into AI design (sometimes called “algorithmic fairness” measures) is essential to meet the fairness requirement. But it’s challenging because fairness itself can be defined in multiple ways, and balancing fairness with accuracy or other goals is complex. From a legal standpoint, if an AI consistently disadvantages people with a protected characteristic (say, female applicants consistently get lower credit limits than male applicants with similar profiles), that could be indirect discrimination under the Equality Act 2010, and also arguably a breach of GDPR fairness. The interplay of data protection and equality law is still evolving in relation to AI – complainants might use both avenues to challenge biased algorithms.
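
A simple internal check of the kind a controller might run as part of a DPIA or fairness audit is to compare favourable-outcome rates across groups and flag large gaps for investigation, as in the sketch below. The 0.8 ratio threshold is an illustrative convention only, not a legal test under the Equality Act 2010 or the UK GDPR, and the record format is an assumption.

```python
from collections import defaultdict

def outcome_rates(records: list) -> dict:
    """Favourable-outcome rate per group; records are assumed to look like
    {"group": "A", "approved": True}."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favourable[r["group"]] += int(r["approved"])
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparity(records: list, threshold: float = 0.8) -> list:
    """Flag groups whose approval rate falls well below the best-performing
    group, using an illustrative ratio threshold."""
    rates = outcome_rates(records)
    best = max(rates.values())
    return [g for g, rate in rates.items() if best and rate / best < threshold]

records = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
print(flag_disparity(records))   # ['B'] -> investigate before relying on the model
```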

4. Data Minimisation vs Big Data: AI development often seeks large datasets to improve model performance. However, data minimisation requires limiting data collection and retention. Organisations might be tempted to collect extensive personal information “just in case” it improves an AI model’s accuracy. Doing so without clear necessity violates the GDPR. For example, an AI recruitment tool might be fed social media data of candidates on the theory it could correlate interests with job success – but unless it’s proven necessary, that could be an excessive use of personal data beyond what is relevant to the job application. Similarly, once an AI model is trained, storing all the raw personal data indefinitely is discouraged; techniques like data anonymisation or aggregation should be used where possible, though true anonymisation can be difficult if data is high-dimensional. The challenge is finding a balance between data-driven innovation and compliance with minimisation and purpose limitation. Controllers often address this by declaring broad purposes (which can dilute purpose limitation) or by obtaining consent for wider use of data (though consent must be specific and informed – hard to achieve if future AI uses are uncertain).

5. Consent and Capacity to Consent: One way to legitimise AI processing and automated decisions is explicit consent (Article 22(2)(c) and Article 6(1)(a)). However, obtaining valid consent for complex AI operations is fraught with issues. Consent must be informed and freely given. If individuals do not understand the AI processing (which is often the case, given complexity), can their consent truly be informed? There is also often a power imbalance – e.g., a job seeker asked to consent to AI evaluation as part of an application may not feel free to refuse. In many AI contexts, consent is either impractical or ethically questionable (e.g., consenting to being subject to surveillance algorithms). As a result, consent is not a panacea for AI compliance. Organisations frequently rely on other lawful bases like legitimate interests or contract necessity, but if those are stretched beyond their intent, they could be challenged. For instance, claiming an automated decision is “necessary for a contract” when it’s more a matter of convenience could be disputed by regulators.

6. Enforcement and Detection: Another challenge is that individuals may not even be aware that an AI system made a decision about them, or on what grounds. Unlike a human decision-maker, an algorithm doesn’t explain itself unless programmed to. While the law mandates notice, in practice these notices may be buried in privacy policies or not provided at all if organisations are negligent. This leads to under-enforcement – people can’t exercise rights or complain if they don’t know what’s happening. The ICO and other regulators are trying to improve accountability by requiring documentation (Article 30 records, DPIAs) that they can audit, but regulators have limited resources relative to the explosion of AI uses. Proving that a company’s “human in the loop” was not meaningful, or that an AI was inherently unfair, can also be an evidentiary challenge. It often requires expert analysis. This challenge suggests that data protection law alone, which largely operates on a complaint or investigation basis, may not catch all problematic AI practices, unless supplemented by sector-specific oversight or whistleblowers.

7. Evolving technology outpacing law: AI technology is advancing rapidly (e.g., the rise of complex generative AI and advanced analytics), potentially outstripping the fine details of current laws. The GDPR was drafted with awareness of profiling and algorithms (hence Article 22), but new AI capabilities (like deep learning models that can infer sensitive attributes or affect large groups of people) raise questions. For instance, if an AI profiles not an individual but a group (group privacy issues), or if it dynamically personalises content in a way that affects someone’s behaviour without a clear decision point, how does the law apply? These grey areas demonstrate that the law might need continuous updates or guidance to remain effective.

In light of these challenges, there have been calls for reform or clarification of the law, as well as the introduction of new regulatory frameworks tailored to AI. The next section discusses how the UK is approaching reforms in this area.

Current Reforms and Proposed Changes

Recognising both the importance of AI to innovation and the concerns about its risks, the UK government has been reviewing the data protection framework post-Brexit. The most notable reform proposal is the Data Protection and Digital Information Bill (DPDI Bill), reintroduced in updated form in 2023 as the DPDI (No. 2) Bill. This Bill, at the time of writing proceeding through Parliament, proposes a series of amendments to the UK’s data protection regime, including significant changes to the rules on automated decision-making.

Under the Bill’s proposals (Clause 11 in the latest draft), Article 22 of the UK GDPR would be re-framed. Instead of the current general prohibition on solely automated decisions with significant effects (subject to narrow exceptions), the law would explicitly permit automated decision-making by default, provided certain safeguards are in place. In effect, the reform aims to move from a model of “ban in principle, allow by exception” to a model of “allow in principle, but regulate with safeguards”. The government’s rationale is to remove potential barriers to beneficial AI and data use, in line with a pro-innovation stance, while still protecting individuals.

Concretely, the DPDI Bill would repeal or replace Article 22’s blanket right and introduce new provisions in the DPA 2018 defining “automated decision-making” and required safeguards. For example, the Bill introduces terminology like “significant decision” (to mean a decision with legal or similarly significant effect on an individual) and “relevant automated decision-making”. A significant decision could be fully automated or partially automated. The Bill would explicitly allow such decisions if controllers put in place specified safeguards, such as the ability for individuals to request human review. In practice, it appears individuals would no longer have an absolute right to avoid automated decisions, but they would have a right to certain protections when those decisions are made. The proposed safeguards echo existing ones – the individual would have to be informed and given avenues to challenge the outcome – but the crucial difference is the burden of compliance shifts. Currently, a company has to justify falling under an Article 22(2) exception to do automated processing; under the reform, a company would be generally free to automate as long as it follows the safeguard procedures, and it would fall to individuals or regulators to show if something went wrong or if safeguards weren’t met.

Additionally, the Bill empowers the Secretary of State to further define key phrases like “meaningful human involvement” and “similarly significant effect” through regulations. This could provide much-needed clarity – for instance, setting criteria for what counts as genuine human oversight in a decision process, or what kinds of impacts are considered “significant” (some guidance might be given on borderline cases, like personalised advertising or content recommendation, which might or might not be deemed to affect someone significantly). Such definitions would help organisations and courts apply the law more consistently, although there is concern that giving ministers power to redefine these terms could be used to narrow protections over time without full parliamentary scrutiny (though the Bill requires affirmative resolution for these regulations, meaning Parliament must approve them).

Reactions to the proposed reforms have been mixed. Industry groups generally welcome more flexibility for automated processing, seeing Article 22’s current form as rigid and uncertain. However, civil society organisations like the Ada Lovelace Institute and the Equality and Human Rights Commission (EHRC) have voiced concerns that weakening the default protections could expose individuals to greater harms. The Ada Lovelace Institute argued that Article 22’s quasi-requirement of human involvement in important decisions is a “first line of defence” against AI harms and that removing it tilts the balance too far towards data users at the expense of data subjects. Similarly, the EHRC warned that expanding automated decision-making without robust safeguards might lead to more discriminatory outcomes or make it harder for individuals to contest algorithmic injustices. Essentially, if organisations are “assumed to be compliant” as long as they tick procedural boxes, individuals might bear the burden of detecting and proving failings after the fact, which is onerous.

On the other hand, the reform does retain safeguards – no one is suggesting a free-for-all. And some changes could be positive: for instance, extending coverage to partially automated decisions would close the loophole noted earlier. If the final law explicitly covers significant decisions even where a human was nominally involved, it could prevent organisations from evading accountability by using minimal human oversight. Clearer definitions via regulation could also help ensure consistency in enforcement (for example, providing examples of what “meaningful human involvement” must entail could deter sham reviews).

Beyond the DPDI Bill, the UK’s approach to AI governance is also evolving through other initiatives. The government published an AI White Paper in 2023 outlining a framework for AI regulation focused on context-specific guidance by sector regulators rather than a single AI law (unlike the EU, which is developing a broad AI Act). Data protection is recognised as one of the key existing regimes relevant to AI, but the White Paper suggested a light-touch, principles-based approach to encourage innovation. This suggests that, in the immediate future, changes to data protection law (like the DPDI Bill) will be the main legally binding reforms specifically impacting AI decision-making, while other AI governance measures might be more advisory or best-practice oriented.

It is worth noting that even without legislative reform, guidance can shape how the law is applied. The ICO may update its guidance on AI and on automated decision-making in light of new technologies and any law changes. The ICO has indicated support for some reforms (such as covering partially automated decisions, as mentioned earlier) but will also likely emphasise that core principles like fairness and transparency must not be diluted. Should the reforms pass, organisations will need to adapt their compliance processes – for instance, maintaining records of safeguards for each AI system, and perhaps handling more requests from individuals who contest automated decisions now that such decisions will be more common.

Conclusion

The regulation of AI under UK data protection law is a dynamic and critical area of law in the digital age. The current framework, centred on the UK GDPR and DPA 2018, imposes important limits on automated decision-making and profiling, seeking to ensure that individuals are not unfairly subjected to opaque or unchallengeable algorithmic decisions. Key GDPR provisions like Article 22, along with the data protection principles and individual rights, provide a legal toolkit to address many of the risks associated with AI – from requiring transparency about algorithms, to giving individuals the right to object or demand human intervention, to obliging organisations to assess and mitigate risks (via DPIAs and fairness measures).

However, applying these rules to real-world AI systems has proven complex. AI technology tests the boundaries of concepts like explanation, consent, and fairness that data protection law rests upon. The experience of recent years, through case studies and enforcement actions, shows both the strengths and limitations of the law. It has been a useful mechanism to challenge egregious examples (such as discriminatory algorithms or lack of transparency), yet it can also be circumvented or may not directly address every concern (for instance, it does not guarantee algorithmic accuracy or absence of bias, it only mandates processes to strive for those ends).

The UK is at a crossroads, contemplating reforms to its data protection regime in the context of AI. The proposed Data Protection and Digital Information Bill would mark a significant shift by relaxing the strict ban on automated decisions in favour of a more permissive, but safeguard-driven, approach. Whether this will improve the regime or undermine individual rights is the subject of debate. Proponents argue it will enable beneficial AI uses and reduce bureaucratic hurdles, whereas critics fear it may open the door to more unchecked algorithmic decision-making, putting vulnerable individuals at risk of unseen bias or error with less recourse.

Ultimately, the challenge for lawmakers and regulators is to strike the right balance: protecting individuals’ fundamental rights to privacy, non-discrimination, and due process in an era of AI, without stifling innovation that could also benefit society. The direction of travel in the UK suggests a cautious loosening of certain rules but coupled with clarifications intended to maintain protections in a more nuanced way. Even as the statutory rules evolve, the underlying principles from data protection – fairness, transparency, accountability – will remain crucial benchmarks for any AI deployment. Organisations building or using AI in the UK should keep those principles at the core of design and operation.

In conclusion, UK data protection law, reinforced by possible upcoming reforms, is and will continue to be a key legal framework governing AI and automated decision-making. It may not answer all ethical and societal questions posed by AI, but it provides a foundation to ensure that as we embrace AI technologies, we do so with respect for individual rights and human-centred values. The law on automated decision-making will likely continue to adapt in response to both technological changes and societal expectations, aiming to ensure that “the machines” remain our tools, not our masters, in decision-making processes.

References

  • Ada Lovelace Institute (Davies, M. 2024) It’s time to strengthen data protection law for the AI era (Blog, 18 March 2024).
  • Article 29 Working Party (2018) Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01, endorsed by the European Data Protection Board).
  • Court of Amsterdam (2021) Bol et al. v Uber BV (unofficial English summary) – ruling on automated decision-making and data access rights of gig workers under GDPR.
  • Data Protection Act 2018 (c.12, UK).
  • Data Protection and Digital Information Bill (No. 2) 2023 (UK, as introduced).
  • Equality and Human Rights Commission (2023) EHRC Memorandum on the Data Protection and Digital Information (No.2) Bill (23 May 2023) – sections 3.1-3.6 on automated decision-making.
  • General Data Protection Regulation (EU) 2016/679 (UK GDPR as retained in UK law) – particularly Articles 4, 5, 13-17, 21-22, 35.
  • GDPR Recital 71 – on safeguards against discriminatory effects of profiling.
  • Information Commissioner’s Office (ICO) (2020) Statement on A-level results and data protection, ICO.org.uk, 2020.
  • Information Commissioner’s Office (ICO) (2022) Guidance on AI and Data Protection (contains ICO and Alan Turing Institute guidance on explaining AI decisions).
  • Pinsent Masons (Lees, S. 2020) “Exam results put reliance on algorithms in the spotlight”, Out-Law News, August 2020.
  • R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 (Court of Appeal).

Article by LawTeacher.com