
The UK’s approach to AI regulation: principles vs prescriptive rules

April 28, 2025

Image: The House of Lords, one of the two Houses of Parliament where AI regulation would be debated and made.

Artificial Intelligence (AI) – defined broadly as machines performing tasks that typically require human intelligence – has become a focal point for regulators worldwide. The rapid advancement of AI technologies presents both enormous opportunities and novel risks. This has spurred a global debate on how best to govern AI systems. On one side, some jurisdictions have pursued comprehensive, prescriptive legislation setting detailed rules for AI. On the other, the United Kingdom (UK) has favoured a more flexible, principles-based approach to AI governance. The UK Government’s current stance is explicitly “pro-innovation,” aiming to nurture AI development by avoiding overly rigid regulations. Instead of a single omnibus new AI law, the UK relies on existing laws and sectoral regulators to manage AI-related issues. This essay examines the UK’s principles-based approach to AI regulation, discusses the Government’s pro-innovation regulatory stance and use of existing legal frameworks, and analyses how this differs from the creation of a single comprehensive AI statute as seen elsewhere (for example, the European Union’s approach).

Background

Regulating an emerging technology like AI requires a delicate balance between fostering innovation and protecting society. Broadly, regulatory approaches can be characterised as principles-based or rules-based. A principles-based approach sets high-level guiding principles or outcomes that must be achieved, allowing flexibility in how those principles are met. In contrast, a prescriptive rules-based approach imposes specific, detailed requirements or prohibitions on conduct. The UK has a regulatory tradition that often leans toward flexibility and outcome-focused principles. For example, the Regulators’ Code 2014 – a statutory code of practice for UK regulators – explicitly advocates a “clear, flexible and principles-based framework” for regulation. This reflects a general philosophy in the UK that regulation should minimise burdens on innovation while still upholding standards.

In the context of AI, the UK’s inclination towards a principles-driven strategy has been influenced by its legal system and economic ambitions. The UK is a common law jurisdiction, accustomed to evolutionary development of the law through cases and high-level statutes rather than exhaustive civil codes. This legal culture lends itself to adapting existing laws to new scenarios such as AI. Additionally, the UK’s National AI Strategy 2021 set out the vision to maintain the country’s status as a global AI leader with a “progressive regulatory… environment” conducive to innovation. Following its departure from the European Union (EU), the UK has sought to distinguish its approach from the EU’s more interventionist regulatory style. Rather than mirror the EU’s path of expansive new AI-specific legislation, the UK Government signalled that it would use its regulatory freedom in a way that suits the UK’s innovation ecosystem. In July 2022, a policy paper outlined the “emerging proposals” for regulating AI, stressing “innovation-friendly and flexible” regulation to both unleash growth and safeguard fundamental values. This set the stage for a distinctively British approach to AI governance, grounded in flexibility and existing legal principles.

At the same time, the need for AI oversight is well recognised. AI can implicate various public interests – safety, privacy, equality, consumer protection, human rights – which have traditionally been protected by different branches of law. The UK Government acknowledges that AI does not operate in a lawless vacuum; even in the absence of AI-specific statutes, a range of existing laws already apply to AI technologies. For instance, if an AI system processes personal data, it must comply with the Data Protection Act 2018 (DPA 2018), which implements stringent principles for fair and lawful processing of personal information. Likewise, an AI tool used in hiring or lending must not discriminate unlawfully, as per the Equality Act 2010 which prohibits indirect discrimination (practices that disadvantage protected groups unjustifiably). A consumer-facing AI product or service must meet quality and safety standards under laws like the Consumer Rights Act 2015 (which implies that goods and digital content be of satisfactory quality and fit for purpose) and the product liability regime in the Consumer Protection Act 1987. If AI is used by a public authority, its use can be challenged under the Human Rights Act 1998 for breaching rights such as privacy or due process. In specific sectors, further tailored regulations exist – for example, financial services legislation (the Financial Services and Markets Act 2000) and regulators like the Financial Conduct Authority impose requirements on algorithms used in finance (such as treating customers fairly), and medical devices law covers AI-driven diagnostic tools. Thus, before introducing any new AI law, the UK already has a patchwork of legal controls addressing many AI-related issues. The question has been whether this patchwork is sufficient and how to coordinate it effectively for AI, versus creating a single comprehensive framework.

Current Approach: UK’s Pro-Innovation, Principles-Based AI Governance

The UK Government’s current approach to AI regulation is encapsulated in its AI Regulation White Paper 2023, titled “A pro-innovation approach to AI regulation” (Department for Science, Innovation and Technology (DSIT), 2023). This White Paper explicitly rejects a rigid, blanket regulatory regime at this stage. Instead, it outlines a flexible, principles-led framework that builds on existing law and empowers existing regulators. At the heart of this framework are five overarching principles intended to guide the development and use of AI across all sectors of the economy. These core principles are: (1) Safety, security and robustness – ensuring AI systems operate reliably and do not pose undue risks; (2) Appropriate transparency and explainability – AI decisions should be sufficiently transparent or interpretable; (3) Fairness – AI should not create unjust outcomes or unlawful discrimination; (4) Accountability and governance – there should be appropriate oversight of AI systems, with clear responsibility for outcomes; and (5) Contestability and redress – people should have avenues to dispute harmful AI outcomes or decisions. By articulating these high-level principles, the UK aims to provide a common direction to all regulators and AI developers, without enshrining highly detailed technical rules in statute (at least initially).

Crucially, the Government decided not to put these principles into legislation immediately. The White Paper states that introducing “new rigid and onerous legislative requirements on businesses could hold back AI innovation”. In other words, codifying detailed AI rules in law right now is seen as potentially stifling to the fast-moving AI sector. Instead, the principles are to be issued on a non-statutory basis, with implementation delegated to existing regulators within their domains. Regulators are expected to interpret and apply the principles in ways appropriate to their sectors. For example, the Information Commissioner’s Office (ICO) can embed the transparency principle when guiding the use of personal data in AI under data protection law, and the Medicines and Healthcare products Regulatory Agency (MHRA) can apply the safety and explainability principles in approving AI-driven medical devices. This sectoral, context-based regulatory method takes advantage of regulators’ expertise in different fields. It also allows for agility: as AI technology evolves rapidly, regulators can update their guidance or expectations without waiting for Parliament to pass new legislation each time. The Government describes this as an “iterative” approach that can adapt and learn from experience, with a central coordination function to monitor how it is working.

The Government’s pro-innovation stance is further reflected in its emphasis on coordination over control. Rather than establishing a single new “AI regulator,” the UK is relying on coordination mechanisms among existing bodies. A new central AI risk monitoring and coordination function is being set up within government to support this framework. Initiatives like the Digital Regulation Cooperation Forum (DRCF) – a coalition of regulators for digital issues (including the ICO, Office of Communications (Ofcom), Competition and Markets Authority (CMA), and others) – are being leveraged to ensure regulators share knowledge on AI and avoid inconsistent or duplicate requirements. For instance, the DRCF has launched an “AI and Digital Hub” to provide a joint advisory service for innovators facing multi-sector regulatory questions. The idea is that an AI developer should not have to navigate conflicting rules from different regulators; the principles create a common thread, and regulators will collaborate on overlapping issues (such as an AI product that raises both data privacy and competition concerns).

Importantly, the UK approach leans on existing legal frameworks to govern AI, with targeted adjustments where necessary. The Government explicitly intends to “leverage and build on existing regimes, maximising the benefits of what we already have”. In practice, this means that current laws are the first line of defence for AI-related risks. For example, the Data Protection Act 2018 already imposes obligations relevant to AI, such as transparency to data subjects and conducting Data Protection Impact Assessments for high-risk processing (required by UK GDPR provisions). Notably, the UK GDPR (Article 22), supplemented by safeguards in the Data Protection Act 2018 (s.14), gives individuals the right not to be subject to decisions based solely on automated processing that produce significant effects (with some exceptions), ensuring a human review in important AI-driven decisions. Likewise, the Equality Act 2010 provides a route to challenge AI systems that result in unlawful bias – a point underlined in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. In that case, the Court of Appeal examined the police’s use of live facial recognition technology under existing law. The court found that the deployment of the AI system breached privacy rights and data protection requirements, and that the police had failed in their duty under Equality Act 2010 s.149 (the Public Sector Equality Duty) to consider how the algorithm might be biased against racial or gender groups. This landmark ruling demonstrated that even in the absence of an AI-specific statute, the UK’s general laws (on human rights, data protection, and equality) can and do govern AI use by public authorities. It set a precedent that any public body using AI must take steps to ensure compliance with those existing obligations – for example, by independently testing algorithms for bias (as the court in Bridges expected) and by providing a sufficient legal basis and safeguards for privacy.

Beyond the public sector, similar reliance on existing law applies to private actors. If an employer uses an AI tool for recruitment that inadvertently discriminates, it risks a claim under the Equality Act 2010 (for example, indirect discrimination under s.19). If a company’s AI product is defective and causes harm, consumers can seek redress under the product liability provisions of the Consumer Protection Act 1987 or sue in negligence. If an AI-powered service fails to meet the standards advertised, consumers have contractual remedies under the Consumer Rights Act 2015. Financial services firms deploying AI in algorithmic trading or credit scoring remain fully subject to regulation under the Financial Services and Markets Act 2000 and oversight by the Financial Conduct Authority – the use of AI does not exempt them from treating customers fairly or managing risks. In essence, the UK’s message is that AI is already regulated by a constellation of general laws; the task is to clarify and update the application of those laws to AI rather than to write an entirely new rulebook. To assist with this, the Government is encouraging regulators to issue AI-specific guidance within their remits. Indeed, some regulators have been proactive. The MHRA, for example, published guidance and a roadmap in 2022 on AI and software as medical devices, clarifying how existing medical device regulations apply to AI innovations. The MHRA is also working on guidelines for AI transparency in health, in line with the cross-sector AI principles, to ensure patient safety and trust are maintained without new primary legislation. This approach – adapting current regulatory frameworks incrementally – aligns with the UK’s preference for outcomes-focused regulation that can evolve.

The Government’s pro-innovation ethos is further evident in its reluctance to impose heavy-handed enforcement at this stage. The AI White Paper initially proposes voluntary, guidance-based compliance with the principles. During an initial period (a couple of years), regulators will experiment with implementing the principles and report on progress. The Government plans to monitor whether this non-statutory framework is effective or whether industry is ignoring the principles. Only after this period, “when parliamentary time allows,” does the Government anticipate possibly introducing a statutory duty on regulators to have due regard to the AI principles. Such a duty, if enacted, would modestly strengthen the framework by legally requiring regulators to consider the principles, but it is far from a comprehensive AI law directly imposing obligations on AI developers or users. Even this step will be taken only if needed – the White Paper makes clear that if voluntary adoption works well, the preference is to avoid additional legislation. This calibrated approach reflects regulatory humility in the face of a fast-evolving technology: rather than rushing to legislate broad rules that might quickly become obsolete or overly restrictive, the UK is testing a lighter-touch model and keeping law-making in reserve. It also aligns with the Government’s intent to make Britain an attractive place for AI businesses. Tech companies often caution that premature or overly strict regulation can chill innovation. The UK’s approach has been applauded by many in industry – indeed, companies at the forefront of AI such as DeepMind and OpenAI publicly endorsed the UK’s flexible framework. They see it as pragmatic and supportive of responsible innovation, in contrast to more onerous regimes.

That said, the UK Government is not oblivious to risks. It has coupled its light regulatory approach with other measures to manage AI risks outside of legislation. For example, the UK hosted the AI Safety Summit in late 2023 (at Bletchley Park), bringing together countries and companies to agree on principles for frontier AI safety. The Government also announced the creation of an AI Safety Institute to research and evaluate the risks of advanced AI. Additionally, in its response to the consultation on the AI White Paper, the Government floated the idea of future targeted, binding obligations for developers of the most powerful general-purpose AI systems (such as advanced foundation models), given the potentially cross-sectoral and significant risks those pose. This indicates that while the general stance is to avoid prescriptive rules, the UK is considering narrow, specific interventions if certain AI capabilities demand it. Such measures would be akin to a safety net – ensuring that the few organisations building cutting-edge AI (e.g. large language model developers) are accountable for safety, perhaps through mandatory testing or transparency requirements, even as broader AI use remains under principles-based oversight. As of 2025, however, these ideas are still in development and no new AI-specific statute has been enacted. The prevailing approach remains one of regulatory incrementalism: start with guidance and voluntary compliance, reinforce with existing legal duties, and legislate new rules only if absolutely necessary.

Comparison with Comprehensive Statutes in Other Jurisdictions

The UK’s approach stands in marked contrast to the path chosen by some other jurisdictions, most notably the European Union. The EU has pursued a comprehensive, prescriptive legal instrument for AI – the Artificial Intelligence Act. Proposed by the European Commission in 2021 and formally adopted in 2024 as Regulation (EU) 2024/1689 (the “EU AI Act”), it is the world’s first horizontal framework law for AI systems (European Union, 2024). The EU AI Act takes a rules-based, codified approach: it defines “AI systems” in legislation and lays down detailed obligations that apply across all EU member states. The core of the EU Act is a risk-tiered model: AI systems are classified into tiers such as unacceptable risk (prohibited uses), high-risk (allowed but heavily regulated), and lower risk (subject to lesser requirements). For instance, AI applications that pose unacceptable risk to fundamental rights or safety (like social scoring by governments or certain forms of real-time biometric surveillance) are outright banned by the Act. High-risk AI systems (a category including AI used in safety-critical areas like medical devices, or in deciding access to employment, education, credit, public services, etc.) are permitted only if they comply with a strict set of prescriptive requirements. These requirements, set out in the legislation, include: conducting risk and impact assessments before deployment; using high-quality training data to minimise bias; ensuring human oversight of the AI’s operation; maintaining extensive technical documentation and logs for traceability; and undergoing conformity assessments (in some cases with external audits or notified bodies) before placing the system on the market. The Act also mandates transparency measures for certain AI, such as requiring that users be informed when they are interacting with an AI system (for example, if a chatbot or deepfake is involved). Non-compliance can attract significant fines – the EU AI Act prescribes penalties of up to €35 million or 7% of global annual turnover for the most serious breaches (mirroring the tough enforcement style of the EU’s GDPR).

This EU approach exemplifies the single comprehensive statute model: it attempts to cover the field of AI regulation within one uniform law. The law’s provisions are detailed and leave less to case-by-case interpretation. In effect, the EU has chosen to explicitly write many AI governance principles into black-letter law and to create specific legal duties for AI developers, deployers, and users. For enforcement, the EU AI Act requires each member state to designate national competent authorities for AI (similar to data protection authorities under the GDPR) to oversee compliance, handle registries of high-risk AI, and issue fines. It is a top-down regulatory strategy that seeks to harmonise AI rules across Europe and provide legal certainty about what is allowed or not. Crucially, the EU’s approach is precautionary – aiming to pre-empt harms by imposing requirements ex ante (before or at the point of AI system deployment). For example, before a high-risk AI system (say, a recruitment algorithm) can be used, the provider must ensure it passes a conformity assessment and meets the technical standards set by the Act. This is a markedly different starting point from the UK’s, which is to leverage existing ex post mechanisms (like after-the-fact accountability via existing law) and flexible guidance, rather than upfront compliance checkpoints for AI systems in general.

The divergence between the UK and EU approaches can be understood in light of regulatory philosophy and timing. The EU tends to favour legal certainty and uniformity through legislation – even if that means the legislation is complex and requires periodic updates. The GDPR for data protection is a prime example of the EU setting a detailed rulebook with global influence. Now with AI, the EU is taking a similar route: a comprehensive statute to set common standards (and indeed the EU AI Act has already influenced discussions in other countries about setting their own AI laws). The benefits of this approach are that it directly addresses specific risks (e.g. banning AI that is deemed too dangerous like social scoring systems, or ensuring AI used in critical areas meets minimum safety criteria) and creates a level playing field across the single market. Companies get a clearer checklist of obligations for high-risk AI, and fundamental rights considerations are woven explicitly into the regulatory requirements. However, the potential downsides are a lack of flexibility and the risk of stifling innovation, especially for startups. Complying with the EU AI Act’s detailed rules can be costly and time-consuming – critics argue it could inhibit smaller firms or deter them from deploying AI in Europe due to compliance burdens. The rules might also become quickly outdated if AI technology outpaces the regulatory definitions or if unanticipated types of AI emerge that weren’t envisioned when drafting the law. The EU has tried to mitigate this by making the Act relatively future-proof (the broad definition of AI and provisions to update the high-risk categories by delegated acts), but any legislative process is inherently less nimble than guidance.

In contrast, the UK’s principles-based approach offers agility and a lighter touch in the short term. It trusts sectoral regulators and the common law process to adjust and enforce as needed. This can encourage experimentation and growth in the AI sector without the immediate overhead of compliance departments ensuring multi-step legal conformity. The UK approach is also contextual: rather than one-size-fits-all rules, it allows regulation to be context-specific. For example, AI in healthcare can be governed through existing medical safety regulations plus the new AI principles, which might result in different practical requirements than AI in finance governed through financial conduct rules plus the same principles. Proponents of this approach say it avoids the pitfall of over-regulating or mis-regulating by assuming all “AI” can be addressed uniformly – instead it recognises that what is appropriate for an AI medical diagnosis tool may not be appropriate for an AI songwriting tool. The flexibility can also better accommodate incremental improvements; if a particular risk becomes urgent (say, a new kind of AI-enabled fraud), UK regulators can issue guidance or use existing powers to respond quickly, without waiting for a new law.

However, the UK’s choice comes with its own challenges. One concern is fragmentation and uncertainty: if each regulator interprets the principles in its own way, businesses operating across sectors might face a mosaic of guidelines. The Government is attempting to coordinate, but without a single rulebook, some inconsistency is almost inevitable. In the absence of clear statutory rules specific to AI, companies may be unsure of their precise legal obligations, which could in itself dampen confidence (the very thing the UK seeks to foster). There is also the issue of enforceability. Principles and guidance lack the hard edge of law. A company flouting a non-statutory AI principle does not face direct penalties unless that conduct also breaches an existing law. For example, if an AI system is opaque (thus violating the transparency principle) but that opacity does not infringe data protection or consumer law, there is currently no explicit legal penalty just for being non-transparent. The EU approach, by contrast, would make transparency a binding duty in many such cases (with consequences for non-compliance). Therefore, the UK’s reliance on existing laws could leave some regulatory gaps if AI brings novel issues not neatly caught by current statutes. The Government’s own White Paper acknowledged that some AI risks “arise across, or in the gaps between, existing regulatory remits”. For instance, consider AI systems that evolve in real time or general-purpose AI that cuts across sectors – these might not be fully addressed by any one regulator’s traditional scope. The Government is aware that its context-based framework is tested by such cross-cutting AI, which is why it is contemplating future targeted rules for foundation models. By contrast, a comprehensive statute could directly cover cross-sector AI by imposing baseline requirements universally.

Other jurisdictions provide further points of comparison. Canada, for example, has been working on an Artificial Intelligence and Data Act (as part of Bill C-27) which, like the EU’s approach, would set obligatory requirements for AI systems identified as high-impact and establish an AI regulator to monitor compliance. China has implemented several regulations targeting AI, such as rules on recommendation algorithms (2022) and generative AI services (2023), which mandate things like algorithmic transparency and content control – again a more prescriptive approach, albeit focused on specific AI sub-domains. The United States so far has not enacted federal AI legislation comparable to the EU’s; instead, it relies on existing laws (much like the UK) – for instance, the Federal Trade Commission uses consumer protection law to go after deceptive AI practices, and sectoral regulators (like the National Highway Traffic Safety Administration for autonomous vehicles) oversee AI within existing mandates. The US has issued soft-law guidance, such as the AI Bill of Rights (Blueprint) and the NIST AI Risk Management Framework, but these are not binding. In a way, the US approach at present is analogous to the UK’s in emphasising guidelines and existing regulation over new laws, though this may change with ongoing policy discussions. The EU’s AI Act thus represents the clearest example of the comprehensive statute model. The divergence between the EU and UK approaches post-Brexit is significant: organisations operating in both jurisdictions will have to navigate two very different regulatory philosophies. A company offering an AI product in London and in Paris, for example, will face minimal AI-specific legal duties in the UK (beyond general laws), but in the EU must ensure full compliance with the AI Act’s technical file, documentation, and possibly notification requirements.

From a legal perspective, the comprehensive statute approach brings clarity and consistency – stakeholders can consult one legal text to understand their obligations. It also signals strongly to the public that AI is being actively governed, potentially boosting public trust. The principles-based approach brings flexibility and adaptiveness, potentially resulting in more tailored and innovation-friendly oversight, but with a reliance on the slower evolution of case law and the diligence of regulators to fill in the details. Each approach has trade-offs in terms of innovation, protection, and administrative burden. The UK Government has consciously chosen the principles route in part to differentiate the UK as a hospitable environment for AI development. Officials have expressed that an overly stringent regime could drive AI talent and investment elsewhere, whereas a proportionate approach will “boost public trust and ultimately drive productivity across the economy” while keeping people safe. Time will tell if this approach indeed strikes the right balance or if gaps in protection will emerge that force a course correction.

Conclusion

The UK’s approach to AI regulation, centred on flexible principles rather than prescriptive rules, reflects a regulatory philosophy of caution in legislating and confidence in existing legal mechanisms. By favouring guidance and leveraging current laws, the UK aims to foster an environment where AI innovation can thrive under the watch of adaptable, context-sensitive governance. The Government’s “pro-innovation” stance is evident in its decision not to rush into broad new AI laws, instead entrusting sectoral regulators with applying high-level principles like safety, transparency, and fairness within their domains. This stands in contrast to comprehensive statutory regimes such as the EU’s Artificial Intelligence Act, which seeks to explicitly control AI risks through detailed legal mandates and prohibitions. The UK model offers agility and avoids burdening businesses with new compliance obligations at a nascent stage of technology development. It also capitalises on the fact that many aspects of AI are already addressed by laws on data protection, equality, consumer protection, and other areas – thus reinforcing the idea that AI should be governed as part of the existing fabric of law, not in isolation.

However, the UK’s principles-based approach is not without its challenges. It requires effective coordination among regulators and assumes that existing laws will be adequate to address AI’s novel challenges. There is a thin line between being light-touch and being lax: if the non-statutory framework fails to prevent serious harms or fails to provide certainty, pressure may mount for more concrete rules. The Government has sensibly built in review mechanisms and kept the door open to future legislation should it prove necessary (for example, a possible statutory duty on regulators, or targeted rules for frontier AI systems). In effect, the UK is testing a model of iterative governance – learning and adjusting as AI evolves – whereas other jurisdictions are locking in rules early. This experimental approach will need vigilant oversight. The success of the UK’s strategy will likely depend on regulators’ proactive guidance, industry’s goodwill in following principles, and the agility of legal institutions to respond to issues through case law and targeted updates.

In conclusion, the UK’s AI regulatory approach epitomises the debate of principles versus prescriptive rules in tech governance. The UK has chosen to start with principles, seeking to embed trust and responsibility in AI development without hindering innovation through over-regulation. This differs markedly from the creation of a single comprehensive AI statute as seen in the EU, which prioritises uniformity and precaution via detailed rules. Both approaches strive to ensure AI is developed and used safely and ethically, but they operationalise that goal in different ways. The UK’s path is an ambitious attempt to “have its cake and eat it” – to be a global AI hub with minimal new legal barriers, yet also to assure the public that AI is under oversight. The coming years (and the real-world performance of AI systems) will test whether a principles-based regime can deliver the accountability and safety that society expects. If it can, the UK may provide a model for a nimble alternative to heavy-handed regulation. If not, the pendulum may swing towards more prescription even in the UK. For now, the UK stands as a leading example of an innovation-centric approach to AI governance, illustrating the ongoing balancing act between innovation and regulation in the age of artificial intelligence.

References

Legislation and Cases:

Automated and Electric Vehicles Act 2018, c.18 (UK).

Consumer Protection Act 1987, c.43 (UK).

Consumer Rights Act 2015, c.15 (UK).

Data Protection Act 2018, c.12 (UK).

Equality Act 2010, c.15 (UK).

Financial Services and Markets Act 2000, c.8 (UK).

Human Rights Act 1998, c.42 (UK).

R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 (Court of Appeal, England and Wales).

Regulation (EU) 2024/1689 (Artificial Intelligence Act). Official Journal of the European Union, OJ L, 12.7.2024.

Regulators’ Code 2014 (UK) – Statutory Code of Practice under Legislative and Regulatory Reform Act 2006.

Government and Policy Sources:

Department for Science, Innovation and Technology (DSIT). (2023). “A pro-innovation approach to AI regulation” (White Paper). London: DSIT (UK Government).

DSIT. (2023). “A pro-innovation approach to AI regulation: Government response to consultation.” London: UK Government.

Department for Digital, Culture, Media and Sport (DCMS) & Office for AI. (2022). “Establishing a pro-innovation approach to regulating AI” (Policy Paper, July 2022). London: UK Government.

UK Government. (2021). “National AI Strategy.” London: UK Government.

European Commission. (2021). “Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act)” COM(2021) 206 final, Brussels.

Article by LawTeacher.com