Data Protection and Privacy: Continuing Trends and Developments
The UK’s data protection law and regulatory priorities continue to evolve, reflecting its position outside the EU, advances in technology and the rise of AI. This article explores regulatory priorities in the UK and how technology is shaping them. It also examines how and why the UK is changing its data protection laws.
Regulatory enforcement and collaboration
The Information Commissioner’s Office (ICO) continues to face significant scrutiny over its approach to enforcement. Critics, such as the Open Rights Group, argue that the ICO is not fulfilling its role effectively, citing a lack of significant action against major tech companies and a slow pace in handling complaints. The ICO has continued its policy of not issuing monetary penalties against public bodies (except in the most serious cases), and during 2024 most of the fines it issued concerned spam messages and calls, where the maximum available penalty remains GBP500,000.
However, in August 2024, the ICO published a provisional decision to impose a GBP6 million fine, which drew significant attention. The proposed fine was directed at a software provider following a ransomware attack that disrupted NHS and social care services. As the ICO has generally prioritised enforcement against data controllers rather than data processors, this case demonstrates that processors are not immune from regulatory scrutiny and will also be held accountable if they do not comply with the UK General Data Protection Regulation (GDPR).
In March 2024, the ICO published its updated Data Protection Fining Guidance. It provides a detailed framework that will be applied when the ICO is determining levels of fines. It also examines certain technical aspects, such as interpreting the concept of “undertaking”, which is a term referred to in the penalty regime of the UK GDPR. The guidance has generally been welcomed as providing transparency for organisations, and is particularly helpful for organisations in better understanding risk exposure.
The ICO continues to work closely with other regulators as part of the Digital Regulation Cooperation Forum (DRCF), which comprises the ICO, the Competition and Markets Authority (CMA), Ofcom (the UK communications regulator, responsible for the TV, radio, telecommunications and postal industries) and the Financial Conduct Authority.
In April 2024, the DRCF published a Workplan setting out its planned activities for 2024/2025. The Workplan centres on ten projects:
- DRCF and digital hub;
- AI;
- online safety and data protection;
- digital assets;
- illegal online financial promotions;
- promoting competition and data protection;
- sharing the latest developments on cross-cutting digital issues;
- horizon scanning and emerging technology;
- supervising technologies; and
- skills and capabilities.
The Workplan provides a good indicator of where regulators are focusing their attention, and the areas in which they are trying to support businesses. As discussed in more detail below, the dominant theme in this Workplan (which spans a number of projects) is AI, and especially how regulators can work together to promote responsible AI.
The Data (Use and Access) Bill
The Data (Use and Access) Bill (the “DUA Bill”) is the new government’s version of the former government’s Data Protection and Digital Information Bill (the “DPDI Bill”), which lapsed prior to the last general election. The DUA Bill introduces several amendments to existing data protection laws, with the intention of making the legal framework more user-friendly and accessible for both individuals and businesses.
A key theme of the DUA Bill is that it aims to facilitate data sharing, rather than prevent it. For example, the DUA Bill builds on the approach to open banking and creates a framework that aims to ease information sharing between businesses and regulated/authorised third parties in key sectors such as utilities, transportation and real estate. The DUA Bill also removes some of the more controversial aspects of the DPDI Bill.
Importantly, the DUA Bill does not depart significantly from the UK’s existing data protection regime. Therefore, organisations will only be required to make minor changes to their existing documentation and processes. However, organisations may need to reconsider their complaints procedure given the new requirement for people to complain directly to organisations before escalating to the ICO. Continued alignment with the EU GDPR also reduces the risk of the UK losing its European adequacy status, which was of concern to some commentators when the DPDI Bill was being debated.
Consent or Pay
Regulators and courts in the UK and EU spent 2024 considering the lawfulness of “consent or pay” business models (also commonly known as “pay or okay”). The ICO published its guidance on consent or pay models in January 2025.
In its guidance, the ICO explained how the power balance between a service provider and a user, as well as the availability of equivalent alternatives, affects whether consent is freely given. Factors affecting the power balance include whether the consent or pay model targets vulnerable groups, its impact on existing users of a product or service, and whether the business holds a dominant position under competition law. Following the ICO and CMA’s co-operation on cookie compliance in 2024, the guidance makes specific reference to the CMA’s framework for assessing market power.
The ICO discussed several methods for assessing whether a fee is appropriate, such as revenue, costs and consumer valuation of the core service. However, it ultimately indicated that the most appropriate measure is the value people place on using a product or service without sharing their personal information, which is context-dependent and will vary with the user base, user income levels and the type of information processed.
While pay or okay mechanisms are most often associated with online news outlets and streaming services, where users must purchase a subscription or premium membership to refuse personalised ads, most loyalty schemes operate on a similar basis. For example, at most supermarkets, consumers receive discounts on products in exchange for their personal data. The ICO’s enforcement of its guidance may prompt a serious re-evaluation of how these schemes are structured and priced.
Technology on the horizon
In February 2024, the ICO published its second annual Tech Horizons Report, which comments on eight priority technologies that the ICO believes may have a particularly significant impact on societies, economies and information rights in the next two to seven years. These technologies are:
- genomics;
- immersive virtual worlds;
- neurotechnologies;
- quantum computing;
- commercial use of drones;
- personalised AI;
- next generation search; and
- central bank digital currencies.
While the Report is not formal guidance, it offers useful insight into the ICO’s current thinking on emerging technologies and where it perceives the key data protection risks to lie.
Artificial Intelligence
Not a month went by in 2024 without new regulatory guidance on AI development and use. 2025 will be no different. On 13 January 2025, Prime Minister Keir Starmer set out the government’s blueprint for turbocharging AI development in the coming decade.
In its 2025 Action Plan, the government said that one of its core principles is to be “on the side of innovators”, which suggests that no UK version of the EU AI Act is on the horizon. A notable point in the Action Plan is that all regulators (including the ICO) will have to publish how they have enabled AI innovation and growth. Should regulators fall short, the government stated that it might make more “radical changes”, such as empowering a central body with a “higher risk tolerance” and “statutory powers” to fast-track AI developments through pilot sandbox licences that override sector regulations. This is likely to be welcomed by many businesses, which may now be more comfortable increasing their investment in developing and deploying AI systems in the UK.
From an ICO perspective, in February 2024, the ICO outlined its strategic approach to regulating AI products and services that fall within scope of the UK’s Data Protection Act and the Online Safety Act. The ICO noted the similarities between data protection principles and the principles set out in the previous government’s AI regulation White Paper, and went on to explain how AI principles can be applied in a data protection context. The ICO stressed that its approach will be pragmatic and risk-based, and highlighted key focus areas:
- foundation models;
- high-risk AI applications (such as in education, healthcare, financial services and recruitment);
- facial recognition technology;
- biometrics; and
- protecting children.
During 2024, the ICO published five chapters on generative AI for consultation. These covered:
- the lawful basis for web scraping to train generative AI models;
- purpose limitation in the generative AI life cycle;
- accuracy of training data and model outputs;
- engineering individual rights into generative AI models; and
- allocating controllership across the generative AI supply chain.
On 12 December 2024, the ICO published its response to the series, summarising the key results of the consultation. In a similar vein to the European Data Protection Board, the ICO also took the view that web scraping for generative AI training is a high-risk, invisible processing activity. This means that the use of personal data to train generative AI models is unlikely to pass the balancing test required for developers to rely on legitimate interests as a lawful basis. The ICO also raised concerns about a lack of practical measures enabling individuals to exercise their rights in respect of their personal data.
On 1 May 2024, the ICO completed its first enforcement action against Snap, Inc. in relation to its ChatGPT-powered “My AI” chatbot. The ICO expressed concerns that the original data protection impact assessments provided by Snap had not adequately assessed the risks of targeting users aged 13 to 17 for advertising, of processing special category data on a large scale, and of users not understanding how the chatbot processed their personal data. The investigation resulted in Snap taking significant steps to provide more detailed risk assessments and adopting mitigation measures, including protections specifically designed for users aged 13 to 17, in the form of:
- a “Family Centre” allowing for parental supervision;
- just-in-time privacy notices for younger audiences;
- content filtering by age bucket; and
- dedicated controls to limit the chatbot when being used to complete homework.
The ICO did not issue a monetary penalty or reprimand, and was satisfied with the remedial steps Snap had taken. However, the ICO used this as an opportunity to remind organisations to consider data protection from the outset before bringing products to market.
The ICO has taken a proactive approach to working on high-risk AI applications. In November 2024, it published an audit report analysing the use of AI tools in recruitment. One recommendation was to use pseudonymised or aggregated personal information where possible, noting that any secondary use of pseudonymised information must still be compatible with the purpose for which it was originally collected.
In 2025, the authors expect the ICO to carry out audits looking at technology in education and youth prison services as part of its strategic approach to AI.
Originally published in Chambers Global Practice Guide: Data Protection and Privacy 2025