As we continue to see the rapid development of digital technologies, such as artificial intelligence (“AI”) tools, legislators around the world are contemplating how best to regulate these technologies.  In the UK, the Government has adopted a “pro-innovation” agenda, with the aim of making the UK “an attractive destination for R&D projects, manufacturing and investment, and ensuring [the UK] can realise the economic and social benefits of new technologies as quickly as possible.”[1] 

In 2022, the Government Chief Scientific Adviser and National Technology Adviser, Sir Patrick Vallance, was tasked with leading a review on regulation for digital technologies (the ‘Pro-innovation Regulation of Technologies Review’ project).  The first report of this project was published on 15 March 2023 (the “Vallance Report”), and the UK Government released its response (the “Government’s Response”) shortly after.  In the same month, the UK Department for Science, Innovation and Technology published a white paper setting out its proposed approach for regulating AI (the “White Paper”).

The Vallance Report and the Government’s Response

The Vallance Report champions a pro-innovation, “bold” approach to regulating emerging technologies.  It proposes a “three-stage approach [that] should underpin [the UK’s] regulatory approach for innovation”, namely:

  1. regulatory flexibility and divergence at an early stage for emerging technologies;
  2. promoting and learning from experimentation to support the scaling of key technologies e.g., through regulatory sandboxes and testbeds; and
  3. seeking international regulatory harmonisation with respect to established technologies, ensuring market access for UK companies. 

Whilst covering a variety of new technologies (such as drones, cyber security, and space and satellite technologies), the Vallance Report’s headline recommendations relate to AI, data and automated transport.  Below, we outline some of the Vallance Report’s recommendations with regard to AI, data and automated transport, together with the Government’s response to each.

Recommendation: Government should work with regulators to develop a multi-regulator sandbox for AI to be in operation within the next six months.

Such a sandbox, which could be supervised by the Digital Regulatory Cooperation Forum (comprised of the UK’s Office of Communications (“Ofcom”), the Information Commissioner’s Office (“ICO”), the Competition and Markets Authority (“CMA”) and the Financial Conduct Authority (“FCA”)), would allow innovators and entrepreneurs to experiment with new products or services without the risk of fines or liability, but with enhanced regulatory supervision.  It is envisaged that this approach would ensure a coherent regulatory response with respect to emerging technologies, bringing together multiple regulators to oversee the final-stage development and deployment of AI technologies.

Government’s Response: “The Government will engage regulators, including the Digital Regulatory Cooperation Forum, immediately, to prepare for the launch of a new sandbox based on the features and principles set out in the Vallance Review.”

Recommendation: Government should announce a clear policy position on the relationship between intellectual property law and generative AI to provide confidence to innovators and investors.

In particular, the Vallance Report recommends that the Government work with the AI and creative industries to develop ways to enable text and data mining for any purpose,[2] and, in parallel, ensure that there are technological solutions for attribution and recognition (such as watermarking).

Government’s Response: The Intellectual Property Office (“IPO”) “will produce a code of practice by the summer which will provide guidance to support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work. … An AI firm which commits to the code of practice can expect to be able to have a reasonable licence offered by a rights holder in return. … However, this may be followed up with legislation, if the code of practice is not adopted or agreement [with respect to the same] is not reached.”

Recommendation: Facilitate greater industry access to public data, and prioritise wider data sharing and linkage across the public sector, to help deliver the Government’s public services transformation programme.

The Vallance Report recommends that the Government consider the use of privacy enhancing technologies or data intermediaries to provide efficient, lower-risk options for data exchange.

Government’s Response: “[The] Government [is] committed to delivering a Data Marketplace by 2025, to provide a single front door for Government users to discover public sector data… [and] will explore how this could be expanded… to make public sector data accessible to industry and other external groups, including the legislative arrangements relating to open public data and aspects such as licensing-type agreements, so as to maximise public and economic value.”

Recommendation: The ICO should update its guidance for processing activities relating to AI as a service (“AIaaS”).

This should include: (i) clarification on when an organisation is a controller, joint controller or processor for processing activities relating to AIaaS; and (ii) guidance on when providers can reuse personal information for improving their models.  The Vallance Report noted that “the current regime is burdensome for consumers and creates disincentives to providing data.”

Government’s Response: As the ICO is an independent regulator with responsibility for this area, the Government specifically noted that the ICO “is best placed to determine how this recommendation should be taken forwards”.

Recommendation: Government should bring forward the Future of Transport Bill to unlock innovation across automated transport applications.[3]

Government’s Response: “The Government is committed to bringing forward this legislation when parliamentary time allows.”

The White Paper[4]

Following the release of the Vallance Report and the Government’s Response, the UK Department for Science, Innovation and Technology (“DSIT”) published its White Paper.  As noted in the White Paper, the Government intends “to leverage and build on existing regimes, while intervening in a proportionate way to address regulatory uncertainty and gaps”, by providing a “principles-based framework for [existing] regulators to interpret and apply to AI within their remits.”  The five principles are:

  1. Safety, security and robustness: AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed.
  2. Appropriate transparency and explainability: AI developers should be able to provide sufficient information about their AI system and enable the relevant parties to access, interpret and understand the decision-making processes of an AI system.
  3. Fairness: the use and outcomes of AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes.
  4. Accountability and governance: governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.
  5. Contestability and redress: where appropriate, users should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

Unlike the approach taken in the EU, with its proposed Artificial Intelligence Act, the White Paper did not propose introducing AI-specific formal regulation or setting up a dedicated regulatory body for AI.  Instead, the White Paper noted that “creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.”  In addition, the White Paper noted that the best way to achieve the context-specific approach under the principles-based framework is to “empower existing UK regulators to apply the cross-cutting principles… [as] regulators are best placed to conduct detailed risk analysis and enforcement activities within their area of expertise.” 

This principles-based framework appears to be in line with the recommendations in the Vallance Report that “the Government should avoid regulating emerging digital technologies too early, to avoid the risk of stifling innovation.”  The White Paper goes further to clarify that during the initial period of implementation, “the principles will be issued on a non-statutory basis and implemented by existing regulators”, as “new rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce [the UK’s] ability to respond quickly and in a proportionate way to future technological advances.” 

ICO’s Response to the White Paper

The ICO appears to be the first UK regulator to have published a public response to the White Paper.  This may not come as a surprise given that personal data may be processed at all stages of the AI life cycle, including in the design, training, testing and deployment of the AI system.  In fact, the ICO noted that, as the data protection authority in the UK, it “plays a central role in the governance of AI”. 

Whilst the ICO expressed broad support for the principles-based framework laid out in the White Paper, it noted several issues which may need to be clarified by or discussed with the Government in more detail in order to effectively implement the framework envisaged in the White Paper:

  1. The role of regulators: the ICO noted that “it is the regulators themselves that must produce guidance and advice, in alignment with the laws that they oversee independently of Government.” As such, to ensure that businesses have sufficient clarity and certainty with regard to the AI regulatory landscape, the ICO “would welcome clarification on the respective roles of Government and regulators in the issuing of guidance and advice”.

    In addition, the ICO requested clarity over what is required of regulators where the White Paper noted that “regulators will be expected to clarify existing routes to contestability and redress, and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate”, as “typically, it is organisations using AI and that have oversight over their own systems that are expected to clarify routes to, and implement, contestability.”  The ICO queried if instead the role of regulators here may be better described as “making people more aware of their rights in the context of AI.”
  2. Compatibility of AI and data protection principles: the White Paper proposes certain principles for the regulation of AI, and the ICO noted that “these principles map closely to those found in the UK data protection framework”.  As such, the ICO “would welcome close collaboration with the Government to ensure that the [White Paper] principles are interpreted in a way that is compatible with the data protection principles”. 

    For example, the ICO suggested (i) amending the “fairness” principle in the White Paper (which is similar to data protection’s fairness principle) such that it also covers the stages of developing an AI system, as well as its use (as currently proposed under the White Paper); and (ii) clarifying that where personal data is used for automated decision-making which produces legal or similar effects[5], it will be a requirement for AI system operators to be able to provide a justification for the decision (as compared to the position under the White Paper which provides that regulators need only consider requiring AI system operators to provide an appropriate justification for the decision). 
  3. Design of the proposed sandbox: the ICO suggested that the Government work closely with the Digital Regulatory Cooperation Forum, and broaden the scope of any sandbox to include all digital innovation (not just AI), “as innovators’ queries are unlikely to be strictly limited to AI and extend to a much broader family of digital technologies that are overseen by the same regulators.”
  4. Cost implications of the proposals: the ICO noted that there would likely be “additional costs to cross-economy regulators such as the ICO, which will now need to produce products tailored to different sectoral contexts in coordination with other relevant AI regulators.” As such, the ICO noted that it may require additional funding from the Government for these proposals to succeed. 
  5. Format of proposed guidance: the ICO recommended that the Government “prioritises research into the type of guidance a wide range of AI developers would value”, to allow regulators to produce effective and user-friendly joint guidance for businesses.

What’s to come?

As evident from the Vallance Report, the Government’s Response and the subsequent White Paper, the UK Government is committed to developing a “pro-innovation” regulatory landscape when it comes to AI, and there appears to be increasing divergence in approach between the UK and the EU as regards AI-specific formal regulation.  There is likely to be another flurry of activity in this area over the coming months: the IPO is expected to produce a code of practice this summer with respect to accessing copyrighted work as input data to train AI models, and the public consultation on the White Paper will end on 21 June 2023.  It also remains to be seen whether the ICO will implement the recommendation in the Vallance Report to update its guidance for processing activities relating to AIaaS.  One thing is for sure – the legal and regulatory framework in the UK with respect to AI is rapidly evolving, so watch this space.

[1] See,

[2] A previous proposal to introduce a new copyright and database right exception which allows text and data mining for any purpose (including commercial exploitation) was formally abandoned by the UK Government in early 2023. 

[3] For additional information on the Future of Transport Bill and the joint report on automated vehicles published by the Law Commissions of England and Wales and Scotland, please see our article “Automated Vehicles: Driving the Future of Transport?” (available here).

[4] See also the summary of the White Paper on our Cleary Antitrust Watch, available here.

[5] See, Article 22 of the UK General Data Protection Regulation.