The European Union (EU) Artificial Intelligence Act – the world’s first comprehensive AI law – is poised to reshape the legal landscape for artificial intelligence (AI) systems. Although the United Kingdom (UK) is no longer in the EU, this new regulation will have significant indirect effects on UK law and businesses. Key areas of UK law likely to feel the impact include data protection, employment and anti-discrimination, consumer rights, and competition law. This essay examines the EU AI Act’s main provisions and timeline, and analyses its influence on the UK’s legal framework in these domains. It considers current UK AI-related legislation (such as the Data Protection Act 2018 and Equality Act 2010), recent developments up to the Act’s February and August 2025 application milestones, the UK government’s “pro-innovation” regulatory approach, and the potential divergence or alignment between UK and EU AI governance.
The EU AI Act: Overview and Timeline
The Artificial Intelligence Act (often called the EU AI Act) is a landmark EU regulation establishing a risk-based framework for AI. It categorises AI systems into tiers of risk – unacceptable, high, limited, and minimal – and imposes obligations accordingly. Unacceptable-risk AI (prohibited systems) includes practices such as AI that manipulates individuals, social scoring by governments, or certain types of real-time biometric surveillance. High-risk AI – for example, AI used in employment decisions (like CV-screening tools) or credit scoring – may be deployed only if strict requirements are met, including risk assessments, data quality checks to prevent discrimination, transparency to users, human oversight, and robust performance standards. Limited-risk AI applications face transparency obligations (e.g. labelling of AI-generated content), while minimal-risk AI is left largely unregulated. The Act aims to ensure AI is “human-centric and trustworthy”, safeguarding health, safety and fundamental rights while providing legal certainty to businesses across the EU.
Although adopted in 2024, the EU AI Act phases in its rules through 2025–2027. It entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with key parts taking effect earlier. Notably, from 2 February 2025, the AI Act’s bans on certain AI practices and new “AI literacy” obligations began to apply. These early provisions outlaw specific harmful uses of AI (such as exploitative or manipulative systems and indiscriminate facial recognition scraping) and require organisations deploying AI to educate and train staff and users about AI’s risks and safe use. Further obligations follow in mid-2025: from 2 August 2025, the rules governing general-purpose AI (including generative AI models) apply. Providers of general-purpose AI models must comply with additional transparency and risk management measures, supported by an EU-facilitated Code of Practice for AI developers. By August 2026 the bulk of the Act will apply to high-risk AI systems, and AI embedded in regulated products must comply by 2027. This staggered timeline gives businesses and regulators time to adjust. Importantly, the Act has extraterritorial reach: it applies not only to EU-based providers, but also to any AI system put on the EU market or used in the EU. A UK company offering an AI-driven service in Europe will thus have to abide by these requirements, even though the UK is outside the EU.
Data Protection and AI in the UK
Data protection law in the UK already regulates some AI activities, especially where personal data is involved. The principal statute is the Data Protection Act 2018 (DPA 2018), which sits alongside and supplements the UK General Data Protection Regulation (UK GDPR). Under UK GDPR Article 22, individuals have the right not to be subject to solely automated decisions with legal or similarly significant effects unless certain conditions are met (such as explicit consent or necessity for a contract, with suitable safeguards). This provision means that if an AI system makes a decision about a person – for instance, an algorithm deciding a loan or a hiring outcome without human input – the person generally has a right to human review of that decision. The EU AI Act will bolster these protections indirectly. For example, the Act imposes data-quality and bias mitigation obligations on “high-risk” AI systems (which include credit scoring and recruitment tools). A UK firm selling AI-driven credit-scoring software in the EU will need to ensure the training data is free from unfair bias and that outcomes do not discriminate. Even for purely domestic UK services, the EU’s push for “high-quality datasets to minimise discriminatory outcomes” is likely to set a de facto standard. UK regulators like the Information Commissioner’s Office (ICO) have echoed similar principles, stressing that AI systems processing personal data must be fair, transparent, and accountable under the DPA 2018.
Recent developments in early 2025 highlight the evolving data protection landscape for AI. The UK’s Data (Use and Access) Bill – a government bill introduced in October 2024 to reform data laws – is in Parliament (as of spring 2025) and includes new provisions on AI. This Bill, currently at Commons report stage and expected to pass by mid-2025, aims to “unlock the secure and effective use of data for the public interest”. Amendments added in the House of Lords would require transparency in AI model training: clauses now mandate that AI developers disclose when they use web scraping (automated data collection) and the sources of training data, and ensure compliance with copyright laws even for data scraped from overseas. These proposed rules (not yet in force) respond to debates over AI systems ingesting personal or copyrighted data without permission. They align with EU trends – the EU AI Act likewise emphasises data governance and even prompted separate EU discussions on AI and copyright. The ICO has supported greater accountability in AI data usage and in February 2025 welcomed the Bill’s intent, while cautioning that enforcement mechanisms must be clear (the Bill’s final form was still under debate as of April 2025). If enacted, this law will update UK data protection to better address AI’s challenges, while attempting to maintain EU adequacy (the EU’s recognition of UK data laws as essentially equivalent to GDPR). Notably, the EU had granted the UK an adequacy decision until June 2025, so the UK’s data reforms are calibrated not to deviate so much as to jeopardise cross-border data flows.
Another aspect of data protection and privacy is the use of AI by public authorities. AI-driven surveillance and decision-making by government bodies can engage privacy and human rights law. The Human Rights Act 1998 (incorporating the European Convention on Human Rights) and the DPA 2018 were tested in the landmark case R (Bridges) v South Wales Police [2020] EWCA Civ 1058. In that case, the Court of Appeal held that the police’s use of live facial recognition technology infringed privacy rights and data protection requirements, partly due to the lack of clear legal guidelines and safeguards. This precedent underscores that even without AI-specific statutes, existing UK law can constrain intrusive AI uses. The EU AI Act’s ban on real-time biometric identification in public by law enforcement reflects similar concerns for privacy and civil liberties. While the UK is not bound by the AI Act’s ban, UK police would still need to satisfy the stricter standards set by cases like Bridges and any future domestic regulations. Indeed, the UK has joined the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (signed in September 2024), an international treaty that, once in force, will commit signatories to uphold human-rights-based governance of AI. This indicates the UK’s recognition that alignment with European norms on AI and privacy is in its interest, even as it forges its own regulatory path.
Employment, Equality and Automated Decision-Making
AI is increasingly used in employment – from hiring algorithms that screen job candidates, to AI tools managing workers in the gig economy. UK employment law and anti-discrimination law, especially the Equality Act 2010, provide a framework for addressing harms that might arise from such AI systems. The Equality Act 2010 (EqA 2010) prohibits discrimination on grounds of protected characteristics (such as race, sex, age, and disability) in employment and other contexts. Crucially, if an employer uses an AI-driven recruitment system that disproportionately filters out candidates of a certain ethnicity or gender, the employer can be liable for indirect discrimination under the EqA 2010 – even if the bias was unintentional and arose from the algorithm’s design or training data. In other words, existing law places responsibility on the deploying company for outcomes of their AI tools. There is no special carve-out for “the computer said it”; automated decisions are ultimately attributable to the human employer. For instance, if an AI scheduling tool allocated fewer shifts to older workers and this could not be justified as a proportionate means of achieving a legitimate aim, it would breach the EqA 2010 just as a biased human manager’s decisions would. UK tribunals have yet to decide a high-profile case on algorithmic bias in hiring, but claims could be brought under established principles of discrimination law. Employers, therefore, must audit and test AI systems for potential bias – a practice encouraged by the Equality and Human Rights Commission and the ICO in guidance on AI and fairness.
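To make the idea of such an audit concrete, the sketch below shows the kind of selection-rate disparity check an employer might run on the outputs of a CV-screening tool. It is a minimal illustration only: the candidate records, group labels and reference-group comparison are hypothetical assumptions, and nothing in the code corresponds to a legal test under the Equality Act 2010.

```python
# Illustrative sketch of a disparity audit on an AI screening tool's outputs.
# The records, group labels and reference-group comparison are hypothetical
# assumptions for demonstration; nothing here is a legal test under the
# Equality Act 2010.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted) pairs produced by the tool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparity_report(outcomes, reference_group):
    """Selection rate per group, and each rate relative to a reference group."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: {"rate": round(r, 3), "ratio_vs_reference": round(r / ref, 3)}
            for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical shortlisting outcomes from a CV-screening model.
    sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
              + [("group_b", True)] * 35 + [("group_b", False)] * 65)
    print(disparity_report(sample, reference_group="group_a"))
    # A marked gap in rates does not itself prove indirect discrimination,
    # but it is the kind of signal that should trigger investigation and a
    # documented justification for the practice.
```

A marked gap in selection rates between groups would not of itself establish indirect discrimination, but it is precisely the kind of signal that should prompt further investigation, adjustment of the tool, and documentation of any objective justification.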
The EU AI Act reinforces these concerns by classifying AI for employment and worker management as “high-risk”, demanding rigorous oversight. Under the Act, an AI CV-screening tool or employee-monitoring system in the EU must undergo conformity assessments before use, ensuring it does not have unacceptable bias and that there is human oversight of its decisions. UK businesses providing AI recruitment tools to EU clients will need to meet those standards, effectively raising their practices to EU levels. Even within the UK, larger companies may voluntarily adopt similar auditing to meet investor and public expectations for fairness. Divergence could emerge, however, in the regulatory approach: whereas the EU mandates upfront compliance for high-risk hiring AI, the UK so far prefers to use existing equality and data protection law and sector-specific guidance to tackle bias. In February 2025 the UK government explicitly noted it is not focusing on AI bias or “fairness of content” issues in its newly refocused AI Security Institute (formerly the AI Safety Institute) – that body will concentrate on national security risks of AI (like bio-weapon design or cyber attacks), leaving questions of bias to equality and data regulators. This indicates a lighter-touch approach: no dedicated UK regulator or law yet for algorithmic discrimination, in contrast to the EU’s detailed Act. However, this hands-off stance may evolve under political pressure. A new government in 2025 could take a stronger line – indeed, a Labour Party AI Action Plan announced in January 2025 proposed dedicated “AI growth and ethics oversight” measures, and a private member’s bill (the Artificial Intelligence (Regulation) Bill, reintroduced in March 2025) has suggested creating an AI regulatory authority and requiring AI impact assessments for bias, though these ideas remain at an early stage. For now, UK employers must navigate AI deployment under general laws, but the EU AI Act serves as a benchmark that may gradually pull UK practice toward more formal accountability in automated employment decisions.
Consumer Protection and Product Liability
AI technologies present novel issues for consumer rights and product safety. In the UK, consumers are protected by a mix of general consumer protection laws and product liability rules, even though none were written specifically with AI in mind. For example, the Consumer Protection from Unfair Trading Regulations 2008 prohibit misleading or aggressive commercial practices – this could apply to an AI-powered shopping website that manipulates consumers (through personalised dark patterns or deceptive chatbot interactions). Similarly, the Consumer Rights Act 2015 implies terms that goods and digital content must be of satisfactory quality and fit for purpose, and that services be performed with reasonable care and skill. If an AI-powered service fails to meet these standards – for instance, a home AI assistant that consistently malfunctions or a recommendation algorithm that systematically misleads consumers – then contractual remedies may be available to the consumer.
A particularly challenging question is how product liability applies when AI causes harm. Under the UK Consumer Protection Act 1987, producers are strictly liable for damage caused by defective products. If an AI is embedded in a product (say, self-driving car software) and a defect in the AI leads to injury or property damage, the manufacturer can be held liable without the injured person needing to prove negligence. However, if AI is provided as a service or updates dynamically, identifying a “defect” or a responsible producer can be complex. The EU has been grappling with this issue: alongside the AI Act, the European Commission proposed an AI Liability Directive to ease victims’ ability to sue for AI-caused harm, and updates to the Product Liability Directive to clarify that software and AI are within its scope. By early 2025, the EU had put the dedicated AI Liability Directive on hold (intending to withdraw it), deciding instead to rely on existing tort law bolstered by targeted adjustments. The UK, for its part, has not proposed any AI-specific liability legislation to date. Instead, it appears content that tort law (negligence) and product liability can be interpreted to handle AI scenarios. British courts may need to adapt concepts like the “standard of care” for autonomous systems or address whether training data flaws count as product defects. This area is ripe for future divergence: if the EU goes further in making it easier to bring AI-related claims, UK consumers might have weaker recourse unless UK law is similarly updated. Businesses creating AI-driven consumer products in the UK may face higher litigation risk in the EU than at home, potentially pressuring UK lawmakers to keep pace to ensure consumer confidence.
Another facet is how the EU AI Act’s consumer-facing provisions could spill over into UK practice. The Act bans AI that exploits vulnerable people or manipulates users in ways that can cause harm. It also requires that users be informed when they are interacting with an AI (like a chatbot) or when content is AI-generated (notably deepfakes). These transparency measures are designed to preserve consumer autonomy and trust. While not mandated in the UK, major companies might implement them globally for consistency. A UK consumer might thus benefit from AI transparency labels simply because an EU law requires the product to have them. Moreover, the Competition and Markets Authority (CMA) in the UK has a consumer protection remit and has begun scrutinising digital manipulative practices. In 2023, the CMA opened research into algorithmic pricing and personalised advertising, concerned that AI could enable unfair discrimination between consumers or tacit collusion on prices. If the CMA finds consumer harm, it could use existing powers under the Competition Act 1998 or consumer law to intervene, even absent an AI-specific statute. The UK’s Digital Markets, Competition and Consumers Act 2024, whose digital markets regime began to take effect in 2025, strengthens enforcers’ ability to tackle unfair practices in digital markets – indirectly covering some AI-driven conduct – and empowers the CMA’s Digital Markets Unit to oversee powerful tech firms. All these moves indicate the UK is upgrading its consumer and market regulations in parallel with the EU’s initiatives, though through its own legislative vehicles rather than an AI Act clone.
Competition Law and AI
AI also has implications for competition (antitrust) law. One concern is that algorithms could facilitate anti-competitive behaviour – for example, pricing algorithms used by firms might learn to coordinate prices, leading to cartel-like outcomes without an explicit agreement. Both EU and UK competition laws (Article 101 TFEU in the EU, and Chapter I of the Competition Act 1998 in the UK) already prohibit agreements or concerted practices that restrict competition. If companies intentionally use AI tools to collude, that would clearly fall foul of these laws. The harder scenario is “tacit algorithmic collusion”, where self-learning algorithms simply adapt to each other’s pricing strategies and arrive at a stable high-price equilibrium without human orchestration. Traditional law struggles with this because there is no proven “agreement” or communication. The EU AI Act does not directly address competition concerns, as it is primarily about AI safety and fundamental rights, but it indirectly contributes to a fair marketplace by imposing transparency in high-risk AI and requiring logs and oversight that could deter secret collusion. The UK, outside that regime, is examining these issues through its competition regulators.
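The dynamic described above can be illustrated with a deliberately stylised simulation. In the sketch below, two pricing rules observe only each other’s last posted price and, without any agreement or communication, ratchet prices up to a stable high level. The cost figure, price cap and “match or probe upward” heuristic are illustrative assumptions, not an economic model of any real market or a description of the CMA’s analysis.

```python
# Deliberately stylised sketch of "tacit algorithmic collusion": two pricing
# rules observe only each other's last posted price and, with no agreement or
# communication, ratchet prices up to a stable high level. The cost, price cap
# and "match or probe upward" heuristic are illustrative assumptions, not an
# economic model of any real market or of the CMA's analysis.

COST, CAP = 10, 50  # hypothetical unit cost and the most buyers would tolerate

def adaptive_price(own_last, rival_last):
    """React only to the rival's price observed in the previous round."""
    if rival_last < own_last:
        return max(rival_last, COST)   # rival undercut us: match downwards
    return min(own_last + 1, CAP)      # rival at or above us: probe 1 higher

def simulate(p_a=12, p_b=12, rounds=45):
    history = [(p_a, p_b)]
    for _ in range(rounds):
        # Both rules update simultaneously from last round's observed prices.
        p_a, p_b = adaptive_price(p_a, p_b), adaptive_price(p_b, p_a)
        history.append((p_a, p_b))
    return history

if __name__ == "__main__":
    run = simulate()
    print("start:", run[0], "-> end:", run[-1])
    # Both sellers drift upward in lockstep and settle at the cap: a
    # cartel-like price reached without any "agreement" ever being made.
```

The point of the toy example is legal rather than economic: the high-price outcome emerges from each algorithm’s unilateral reaction to the other, so there is no “agreement or concerted practice” in the traditional sense for Article 101 TFEU or the Chapter I prohibition to bite on.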
The Competition and Markets Authority (CMA) launched an AI Foundation Models review in 2023 to study how the development and deployment of large AI models (like GPT-type models) might affect competition and consumers in the UK. In an update in early 2024, the CMA indicated potential benefits from AI (innovation, new services) but also flagged risks, such as a few big companies controlling key AI infrastructure or using AI to entrench market power. By early 2025, the CMA’s AI Steering Committee reported on “foundation models, competition and consumer protection” (whitecase.com), signalling continued scrutiny. Notably, one of the five principles in the UK’s AI White Paper is “fairness”, interpreted in part as ensuring AI does not undermine fair competition or create unjust outcomes for consumers. This principle is non-binding as yet, but regulators like the CMA are expected to incorporate it into their approach. In practice, the CMA can already take action if, say, two retailers rely on the same pricing algorithm that is configured to avoid price undercutting (amounting to a hub-and-spoke cartel via an AI supplier). There is precedent for penalising algorithm-facilitated collusion – in the CMA’s 2016 online posters case (which followed a parallel 2015 US prosecution), a seller was fined for using automated repricing software to implement a price-fixing agreement – and UK law would similarly treat AI as just another tool used to break the law.
Another angle is market dominance: Large AI models require vast data and computing resources, which only a few tech giants possess. The EU’s Digital Markets Act (DMA), which does not apply in the UK, imposes obligations on such “gatekeepers” to prevent abuse – including transparency in ranking algorithms and data-sharing duties. The UK’s new digital competition regime (under the Digital Markets, Competition and Consumers Act 2024) is convergent in spirit, targeting firms with “Strategic Market Status” and imposing conduct requirements to curb anti-competitive practices. Thus, while the EU AI Act itself is not competition law, it exists in a broader EU strategy that the UK is partly mirroring through separate tools. Divergence may lie in execution: The EU’s regulatory web (AI Act, DMA, data sharing rules) is more prescriptive, whereas the UK thus far leans on case-by-case enforcement and principle-based guidance. However, the gap may narrow as the UK implements its new competition reforms. In sum, UK competition law can address many AI issues already, but the EU’s moves put pressure on the UK to remain aligned in outcomes so that AI firms operating across both jurisdictions face similar expectations against collusion and abuse of dominance.
The UK’s Pro-Innovation Approach vs EU’s Prescriptive Approach
At the core of the UK-EU dynamic on AI governance is a philosophical difference: principles-based, sector-led regulation in the UK versus comprehensive, centralised rule-making in the EU. In March 2023 the UK Government published its White Paper on AI Regulation, subtitled a “pro-innovation approach” (insideprivacy.com). Instead of an overarching AI Act, the UK opted for five cross-sectoral principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress – to be applied by existing regulators within their domains. For example, the medicines regulator (MHRA) would oversee AI in medical devices under these principles, the ICO would do so for data-driven AI, and so forth. Initially, this framework is non-statutory: regulators are encouraged to consider the principles in their guidance and enforcement. The government has indicated it may introduce a statutory duty on regulators to have due regard to these AI principles in the future, but as of 2025 no such law has been enacted. This light-touch approach aims to allow AI innovation to flourish without the compliance burdens that a broad law like the EU AI Act might impose. UK ministers have been explicit about avoiding heavy-handed rules that could stifle tech investment, stressing “innovation-friendly” regulation and claiming that overly strict regimes (implying the EU’s) might stymie competitiveness.
By 2025, however, the UK’s stance is gradually shifting in response to geopolitical and domestic developments. The government has begun to recognise that some binding measures may be needed to address highest-risk AI. In late 2024, after high-profile global discussions on AI safety, the UK secured voluntary commitments from leading AI firms (similar to those in the US) to manage risks in advanced models. The Prime Minister’s Office indicated in early 2025 an intention to “make such AI safety commitments legally enforceable” – suggesting future legislation might mandate certain safety standards for foundation models. Indeed, the UK government announced plans to introduce legislation in 2025 specifically targeting AI risks, including granting the AI Safety Institute (now the AI Security Institute) greater independence and enforcement powers. It is telling that this came on the heels of the February 2025 milestone of the EU AI Act. UK businesses, especially those in tech, have expressed concerns about having to navigate divergent regimes: one industry survey in April 2025 found a majority of UK firms using AI felt they would eventually need to comply with EU rules regardless, due to market demands. Thus, the pressure for regulatory harmonisation is mounting. On the other hand, the UK also sees an opportunity to compete by regulating smarter – officials have pointed to the Draghi report in the EU which warned that over-regulation could hamper innovation, implying the UK can attract AI development if it avoids the perceived pitfalls of the EU’s approach.
The UK government’s March 2023 AI White Paper remains the cornerstone of its approach, and by early 2025 regulators had responded with their sectoral AI plans as requested. For example, the Financial Conduct Authority (FCA) in 2024 outlined how it will use sandboxes and existing financial regulations to oversee AI in finance (focusing on things like AI in credit assessments, which incidentally is classified as high-risk by the EU). The ICO similarly published its priorities on AI and data protection, supporting the White Paper’s principles and highlighting issues like foundation models and biometrics within its remit. This coordinated effort suggests that the UK’s decentralised method can address many concerns if done diligently. However, a potential drawback is consistency: without a single AI Act, gaps or overlaps might occur, and businesses could face a patchwork of guidance from different regulators. To mitigate this, the idea of an “AI Authority” has been floated (notably in the private member’s AI Bill) to coordinate across regulators. Even if that Bill does not become law, the government might implement parts of it administratively. For instance, it set up a central AI Regulation Function within the Department for Science, Innovation and Technology (DSIT) to help steer regulators – effectively a light version of an AI authority.
Impact on UK Businesses and Legal Harmonisation
The EU AI Act will significantly impact UK businesses, even though it is EU law. Any UK-based AI provider or developer that markets products or services in Europe must ensure those AI systems conform to the Act. This might mean UK companies need to perform risk assessments, keep technical documentation, register their high-risk AI systems in an EU database, and meet transparency requirements for users in the EU. For some, this will involve appointing EU representatives or collaborating with EU notified bodies to certify compliance – analogous to what happened under the GDPR, where UK firms had to appoint EU representatives for their EU-facing data processing. The compliance burden could be substantial, particularly for smaller AI startups. For example, a UK startup offering AI-driven medical diagnostic software (an inherently high-risk use) would have to invest in compliance to sell in the EU: ensuring its software undergoes conformity assessment under the AI Act, possibly adjusting its algorithms to meet the EU’s accuracy and robustness criteria, and providing the extensive technical documentation and transparency information the law requires. This raises costs, and if the UK does not impose similar requirements domestically, the company faces a double challenge – one regime abroad, another at home. Some UK businesses might choose to apply the EU’s standards across the board for simplicity (especially if they anticipate the UK eventually aligning), effectively voluntarily harmonising with the EU AI Act. Others might geofence their products, offering a less-regulated version in the UK and a compliant version in the EU, though this is difficult to maintain in practice and could raise ethical questions.
From a legal harmonisation perspective, the existence of the EU AI Act puts the UK at a crossroads. One option is to align closely: either through a treaty or simply by enacting parallel rules. This would ease cross-border operations and signal to the world that UK AI products meet the stringent EU benchmarks, potentially an advantage in trust and safety. The counter-option is deliberate divergence: the UK could position itself as a jurisdiction with more flexibility for AI development, arguing that its approach still protects rights but with less red tape. There is precedent in data protection – the UK initially retained the GDPR in full, but is now tweaking it via the Data (Use and Access) Bill to be more business-friendly while hoping to stay adequate in the EU’s eyes. A similar “UK AI Act” is not yet on the table (the private member’s AI Bill is more a discussion piece than government policy), but bits of it may materialise in other legislation (for example, requiring AI impact assessments or accountability measures could appear in future UK digital regulation).
From August 2025, the UK will have to reckon with the reality of the EU AI Act’s general-purpose AI provisions taking effect and the approaching full application date in 2026. We can expect increased dialogue between UK and EU regulators. The UK has observer status in some EU coordination fora and has signalled it wants cooperation on AI governance. If the UK diverges too much – for instance, if it were perceived as a haven for AI applications banned in the EU (like social scoring systems or certain surveillance tech) – that could create political friction and possibly obstacles for UK-EU data and tech partnerships. On the other hand, if the UK’s flexible approach yields robust innovation without evident harm, it might influence EU thinking or at least provide a useful comparison. Given the global nature of AI, there is also a drive for international consistency: the G7 has discussed baseline standards (the “Hiroshima AI process”), and the OECD continues to promote its AI Principles. The UK’s involvement in the global AI Safety Summit (held in the UK in November 2023) and follow-up international initiatives shows that it seeks leadership in shaping norms that are not far removed from the EU’s values, even if the regulatory mechanisms differ.
Conclusion
The EU AI Act represents a significant regulatory advancement with ripple effects far beyond the EU’s borders. For the UK, a nation with a strong tech sector and deep ties to European markets, the Act serves as both a catalyst and a challenge. In key areas of law – data protection, employment equality, consumer protection, and competition – the principles and obligations of the EU AI Act are prompting the UK to evaluate its own frameworks. The UK’s current approach relies on adapting existing laws (like the Data Protection Act 2018 and Equality Act 2010) and empowering sectoral regulators, under broad principles, rather than enacting an AI-specific statute. This pro-innovation stance seeks to avoid over-regulation, yet the steady implementation of the EU AI Act through 2025 and 2026 is increasing pressure on the UK to converge at least on outcomes if not on form. Recent UK developments – from the Data (Use and Access) Bill’s AI provisions and new codes of practice, to consultations and an action plan under a prospective new government – indicate a trajectory of gradually strengthening AI governance while trying to preserve flexibility.
Ultimately, the impact of the EU AI Act in the UK will be measured by how far it drives changes in business behaviour and legal standards. UK businesses are already gearing up for EU compliance, which may effectively raise the bar for their UK operations too. Regulators in Britain are learning from their EU counterparts’ experiences in enforcing the AI Act’s rules. There is potential for regulatory divergence, but complete decoupling is unlikely given the interconnectedness of markets and the shared challenges AI poses. Instead, a picture is emerging of the UK and EU moving on parallel tracks – the EU with detailed prescriptive rules, the UK with principle-based guidance – that may gradually draw closer together. The coming years (and the legislative developments through and beyond 2025) will clarify whether the UK will firmly align with the EU’s AI regime or carve out a distinctly different, yet hopefully compatible, path. For now, anyone deploying AI in the UK must navigate a complex interplay of domestic law and the indirect pull of Europe’s AI Act – a task requiring careful compliance strategy and an eye on future regulatory convergence.
References
- Artificial Intelligence Act, Regulation (EU) 2024/1689 (EU AI Act). Entered into force 1 August 2024, application from 2025-2027.
- Data Protection Act 2018 (UK).
- UK General Data Protection Regulation (UK GDPR), Art. 22 on automated decisions.
- Equality Act 2010 (UK).
- R (Bridges) v South Wales Police [2020] EWCA Civ 1058 (Court of Appeal).
- Osborne Clarke (February 2025). Artificial Intelligence – UK Regulatory Outlook February 2025 osborneclarke.com.
- Osborne Clarke (March 2025). Artificial Intelligence – UK Regulatory Outlook March 2025 osborneclarke.com.
- White & Case (2024). AI Watch: Global regulatory tracker – United Kingdom whitecase.com.
- IAPP – Bracy, J. (12 Mar 2025). “UK minister says Data Use and Access Bill in final stages” iapp.org.
- UK Government (March 2023). AI Regulation: a pro-innovation approach (White Paper) gov.uk.
- UK Government (2023). Establishing a pro-innovation approach to regulating AI – Policy Paper.
- Department for Science, Innovation and Technology (Feb 2024). UK Government’s response to the AI regulation white paper consultation.
- Competition and Markets Authority (Sept 2023). AI Foundation Models: Initial Report.
- European Commission (Digital Strategy) (Apr 2025). “AI Act – Shaping Europe’s digital future” digital-strategy.ec.europa.eu.
- European Commission (Feb 2025). “First guidelines on AI Act – definition of AI system and prohibited AI” osborneclarke.com.
- European Commission (2022). Proposal for an AI Liability Directive (COM(2022) 496) – withdrawn from 2025 work programme osborneclarke.com.
- Council of Europe (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (not yet in force, UK signatory).
- Digital Markets, Competition and Consumers Act 2024 (UK).
- Data (Use and Access) Bill [HL] 2024-25 (UK, in Parliament) osborneclarke.com.
- Crime and Policing Bill 2024-25 (UK, in Parliament) – includes offence for AI-generated child abuse material osborneclarke.com.
- KPMG (April 2025). “Evolving plans for AI regulation – reconciling frameworks with competitiveness” kpmg.com.