UK Government AI Regulation Response & Roadmap – Is the Government behind the wheel?

On 6 February 2024 the Rt Hon Michelle Donelan, Secretary of State for Science, Innovation and Technology (DSIT), published the government’s response to its original consultation on AI regulation of March 2023. The government believes that its overall approach, which is stated to be strongly pro-innovation, pro-safety, agile, sector-based, common-sense and pragmatic, has generally been well received.

In the response, Mrs Donelan states: “I have sought to double-down on this success and drive forward our plans to make Britain the safest and most innovative place to develop and deploy AI in the world”.

Consistent with the EU’s view, the government is committed to ensuring that the future of AI is safe AI.

However, the government has stated that, for the present, it is continuing with its planned non-statutory basis of regulation, relying on sector-based regulation coordinated by a central function. The government still believes that a non-statutory approach “currently offers critical adaptability – especially while we are still establishing our approach – but we will keep this under review” (paragraph 16). The government states that its decision will be informed in part by the plans published by the regulators, by the review of regulators’ powers, and by adopting targeted binding measures where necessary. The paper acknowledges (paragraph 76) that “the challenges posed by AI will ultimately require legislative action in every country once understanding of risk has matured. Introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefitting from AI. In line with the adaptable approach set out in the AI regulation white paper, the government would consider introducing binding measures if it determined that existing mitigations were no longer adequate and it had identified interventions that would mitigate risks in a targeted way”. At that point the government would balance innovation and competition against the benefits of regulation.

What does the paper do?

There is less of a clear theme to the paper than in the original consultation. In essence, the government sets out its proposals for a regulatory framework to “keep pace with a rapidly advancing technology”. As part of this, it sets out approaches to the following:

  1. delivering a proportionate, context-based approach to regulating the use of AI;
  2. examining the case for new responsibilities for developers of highly capable general-purpose AI systems;
  3. international collaboration on AI governance; and
  4. setting out a roadmap of next steps.

The government’s approach to AI regulation

The government believes that its five cross-sectoral principles for regulators to interpret and apply remain sound. These principles were:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

The paper reports on the progress made by regulators with responsibility for AI, including the Competition and Markets Authority (CMA) and the Information Commissioner’s Office (ICO), while Ofgem (for gas and electricity) and the Civil Aviation Authority (CAA) are both working on AI strategies to be published later this year.

In order to achieve consistency, the government has asked each regulator to publish an update, outlining its strategic approach to AI, by 30 April 2024. This would entail:

  • an outline of the steps they are taking in line with the expectations of DSIT set out in the white paper;
  • an analysis of AI-related risks in the sectors and activities the relevant regulator regulates, and the actions being taken to address these;
  • an explanation of the regulator’s current capability to address AI as compared with its assessment of the requirements, and the actions being taken to put the right structures and skills in place; and
  • a forward-looking plan of activities over the following 12 months.

The original white paper contemplated a central function within government to oversee the individual sector regulators. This is now being established, with recruitment underway. The government will also launch in 2024 a targeted consultation on a cross-economy AI risk register to capture the range of risks across the economy. The intention is to “provide a single source of truth on AI risks which regulators, government departments and external groups can use”, and to support the identification of risks that fall across, or in between, the remits of regulators so that gaps can be identified and further action prioritised.

In addition, the government is considering the “added value” of developing a risk management framework similar to the US NIST standards, which are becoming widely adopted. Also underway is a gap analysis with regulators to establish and review potential gaps in existing regulatory powers and remits.

In order to drive coordinated action across government, DSIT is also establishing lead AI ministers across all departments to bring together the work on the risks and opportunities driven by AI in the relevant sectors, and to oversee the implementation of the necessary frameworks and guidance for public sector use of AI. The AI governance structure is set out in a diagram in section 5.1.2 of the response, which shows the different bodies responsible for AI within government and their relationships with regulators and industry.

[Extract from Consultation outcome, A pro-innovation approach to AI regulation; government response 6 February 2024]

There are also a number of initiatives underway to address possible societal harms from AI including updated guidance on responsible use of AI in employment situations.

Creative industries and media organisations have concerns regarding copyright protection in the era of generative AI. A recent review by the UK Intellectual Property Office and a working group made up of rights holders and developers was not able to agree a voluntary code on IP protection, so the government is now looking at a new way forward to balance the interests of rights holders and AI developers, including on an international basis. We await further proposals on this approach.

Regulators and public sector bodies are all instructed to address AI-related bias and discrimination within their domains. In particular, the ICO has updated its guidance on fairness.

Reform of data protection law

The UK’s Data Protection and Digital Information (No. 2) Bill (DPDI) is currently progressing through Parliament. The government is taking this opportunity to review, in particular, the current rules on automated decision-making, which it acknowledges are confusing and complex, undermining confidence to develop and use innovative technologies. The report signals that the DPDI Bill will expand the lawful bases on which solely automated decision-making can take place, in order to assist the responsible use of automated decision-making technologies. This is balanced against the interests of data subjects, who will have the right to obtain information about decisions, the opportunity to make representations, and the right to request human intervention or to contest automated decisions.

There is also a significant focus on preventing false or misleading information generated by AI, including through watermarking and output databases, and there will be a call for evidence on AI-related risks to trust in information, to develop thinking in this “fast moving and nascent area of technological development”.

AI security

The misuse of AI technology is clearly a major concern. In order to address potential criminal use of AI, the government will examine the extent to which existing criminal law provides appropriate redress for AI-related offending and harmful behaviour.

In order to help organisations develop and use AI securely, the National Cyber Security Centre (NCSC) published guidelines for secure AI system development in November 2023. The government is looking to build on this by releasing a call for views on next steps, including a potential code of practice for the cyber security of AI based on the NCSC’s guidelines. It is acknowledged that international collaboration in this area will be essential.

Examining the case for new responsibilities for developers of highly capable general purpose AI systems

As noted above, the government has recognised that legislation may be necessary for certain types of AI. The paper distinguishes between the following types of AI model:

  • highly capable general-purpose AI – generally, foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models;
  • highly capable narrow AI – foundation models that can perform a specific set of narrow tasks, usually within a specific field such as biology, with capabilities that match or exceed those present in today’s most advanced models. Generally speaking, these models will demonstrate super-human abilities on narrow tasks or domains;
  • agentic AI or AI agents – a subset of AI technologies that can competently complete tasks over long timeframes.

The focus in the current response is on highly capable general-purpose AI systems. It sets out some of the key questions that regulators will have to grapple with when deciding how best to manage the risks of such systems, including the allocation of liability across the supply chain and the open release of the most powerful systems. It is here that the most natural overlap may come with the EU AI Act and the high-risk AI systems within that regulation. This will be an important part of future regulation. There is a concern that highly capable AI systems can present substantial risks across multiple sectors or applications, which could result in multiple harms across the whole of the economy. There is a strong focus on voluntary measures to build an effective regulatory approach in the first instance, including further development of the principles and processes outlined at the AI Safety Summit at Bletchley Park in November 2023.

Alongside the voluntary measures agreed at the AI Safety Summit, governments and AI companies agreed that both had a crucial role to play in testing the next generation of AI models, both before and after deployment. In the UK, DSIT expects that the newly established AI Safety Institute will lead this work. Its core functions will be to develop and conduct evaluations of advanced AI systems, to understand them in more detail and to understand safety-relevant capabilities, to drive foundational AI safety research and to facilitate information exchange. The Institute itself will not act as a certification body designating specific systems as safe or unsafe, but it will develop technical expertise to understand the capabilities and risks of the systems, to improve government risk evaluation and safety features. Whilst there is some expectation of legislation here, it is too early to predict which systems will lead to significant risk in practice. As such, the government intends future regulation to be targeted at a small number of developers of the most powerful general-purpose systems. It will do so by establishing some form of dynamic thresholds to respond to advances in AI development, which would relate to forecasts of two “proxies” for evaluation, being:

  1. Compute: the amount of compute required to train a model; and
  2. Capability Benchmarking: setting the capabilities in certain risk areas that the system may operate in, to identify whether high capability of the system could result in high risk.

Further thresholds could be considered as necessary.

On working with international partners to promote effective collaboration, the paper notes that many countries are attempting to take an international approach to AI, and the section on collaboration is accordingly quite detailed, listing a number of organisations that already have responsibility for AI. Interestingly, the paper does not refer at all to the EU AI Act or to any attempt to provide consistency with that legislation. Measures are based more on partnership, information exchange and working in cooperation with established international bodies.

What next? The government’s roadmap

The government summarises 25 initiatives under five headings to progress AI regulation. These include continuing to develop domestic policy positions on AI through wide-ranging consultation, engagement and collaboration, particularly working closely with the AI Safety Institute. AI risks and opportunities will also be reviewed in more detail, including the further code of practice for the cyber security of AI based on the NCSC’s guidelines and further mechanisms for government dialogue. A priority is also to build out the central function and to support regulators with key governance and transparency objectives.

Conclusion

Overall, a number of detailed actions are proposed, together with a significant number of announcements of financial support to develop the AI ecosystem. The government’s approach is consistent with the original white paper, but it does acknowledge that legislation will become inevitable in time. As such, the key requirements of the original white paper, its five key principles and the questions asked of regulators will remain highly relevant as the government seeks to develop an environment that promotes the best of AI while restricting the harms that could arise. The approach remains very different from that of the EU AI Act, which has attracted some criticism for its complexity, but the two are aligned in their emphasis on safe AI (in the case of the UK) and on AI that is safe and respects fundamental rights and democracy (in the case of the EU AI Act), notwithstanding that the title of the UK response is firmly focussed on the pro-innovation approach.
