
    Quick Reads

Digital Deception: The Rise of Deepfakes

Deepfakes are manipulated audio, video, or images that use artificial intelligence (AI) to create highly realistic content that can be difficult to distinguish from reality. The term “deepfake” is a blend of “deep learning” and “fake,” reflecting the deep learning techniques used to create this content. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence.

In machine learning, a system uses training data to develop a model for a specific task. The more robust and complete the training data, the better the model becomes. In deep learning, a model can automatically discover representations of features in the data that permit its classification. Such models are, in effect, trained at a “deeper” level. [1]
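The distinction above can be illustrated with a toy example. A simple linear model cannot learn the XOR pattern from raw inputs, but a small neural network with one hidden layer can, because the hidden units learn their own intermediate features — a minimal sketch of the “feature learning” that deep learning performs at scale (this is an illustrative exercise, not a deepfake generator; all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a model with no hidden layer fails on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units: these units are the "learned representations".
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)   # learned intermediate features
    p = sigmoid(h @ W2 + b2)   # prediction from those features
    # Backpropagation: gradient of binary cross-entropy loss
    dp = p - y
    dW2, db2 = h.T @ dp, dp.sum(axis=0, keepdims=True)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)
    W2 -= lr * dW2 / len(X)
    b2 -= lr * db2 / len(X)
    W1 -= lr * dW1 / len(X)
    b1 -= lr * db1 / len(X)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

Deepfake models work on the same principle, but with many more layers and vastly more training data — which is why they can learn representations of a person's face or voice faithful enough to fool a viewer.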

Deepfakes represent a subset of the general category of “synthetic media” or “synthetic content.” Synthetic media is any media that has been created or modified through the use of AI or machine learning, especially if done in an automated fashion.

While this technology certainly has the potential for positive applications, the misuse of deepfakes presents new and complex challenges for individuals and businesses alike. 

Reputational Risks 

Businesses need to be aware of the potential of deepfakes to spread misinformation about a particular topic, industry, person, or entity. Deepfake technology can be used to create convincing videos of CEOs and other public figures saying or doing things that never actually occurred, inflicting serious financial and reputational damage. 

As we have seen in the recent case in Hong Kong, deepfakes are increasingly being used to commit financial crimes by impersonating individuals within a company in order to obtain sensitive information. An employee in a multinational firm’s Hong Kong office was duped into attending a video call with what he thought were several other members of staff, all of whom were in fact deepfake recreations. Believing everyone else on the call was real, the worker agreed to remit a total of HK$200 million (about US$25.6 million) to the fraudsters. 

Claims for Defamation 

The increase in the creation and dissemination of malicious deepfake content will likely also lead to an increase in the number of defamation claims. However, the context in which the deepfake content is produced will play an integral part in the success of any claim. For example, claims concerning content that was intended as a parody would be unlikely to succeed. If, however, the reasonable viewer would not be aware of a video’s falsity, it may be possible to bring a claim against the creator and/or publisher of the video, such as the host website.  

AI in Hollywood 

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strike in 2023 also highlighted some of the unresolved legal and reputational issues for talent brought about by the increased use of AI in the entertainment industry.

The use of this technology has its benefits: the creation of realistic digital characters, the enhancement of special effects, and even the generation of entire storylines. However, this progress has also given rise to AI-generated content that blurs the line between reality and fiction. Correspondingly, questions arise around consent if, for instance, a production company unilaterally regenerates an actor’s likeness, and around remedies for musicians if they (or their work) are recreated using AI technology without their permission. We are likely to see some interesting cases down the road as courts try to address these issues. 

Data protection

The implications of deepfake technology also extend into the realm of data protection. It is arguable that, in processing the personal data required to create a deepfake, the creator is a controller subject to strict obligations on how the source material is processed. In the absence of any lawful basis for processing an individual’s face and voice, the creator may be liable. 

Intellectual property

A deepfake may also breach intellectual property (IP) rights such as copyright, which may be relevant where other original works have been substantially copied in a deepfake creation. AI technology needs to be trained to know what the individual who is the subject of the deepfake looks like. It does this by combing the internet for photos, music, or videos of the person it is copying. However, it is the owners of the copyright in those photos or videos, rather than the individual subject (unless they are also the copyright owners), who will have a cause of action for infringement if their works are copied without permission. 

Future Considerations 

The rise of deepfakes presents complex legal and operational issues for businesses that require a multifaceted approach. Science and technology are constantly advancing. Deepfakes, along with automated content creation and modification techniques, merely represent the latest mechanisms developed to alter or create visual, audio, and text content. The key difference they represent, however, is the ease with which they can be made – and made well. 

Businesses should consider reviewing their current policies and procedures and implementing more robust measures to verify the authenticity of audio, video, and other media content before relying on it for important decisions. Technological solutions, such as digital watermarking and blockchain authentication, can also aid in detecting and preventing the spread of deepfakes. By embedding these technologies into disseminated media content, it becomes easier to trace its origins and verify its authenticity. 
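The underlying idea of such authentication measures can be sketched in a few lines: the originator records a cryptographic fingerprint of the media at publication, and a recipient re-computes it before relying on the content. This is an illustrative simplification only (the file name, registry, and byte strings below are hypothetical); real provenance schemes such as watermarking or blockchain registries are considerably more involved:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

# At publication: the originator records the fingerprint in a trusted registry.
original = b"...raw video bytes..."
registry = {"press-release.mp4": fingerprint(original)}

# On receipt: recompute the fingerprint and compare before relying on it.
received = b"...raw video bytes..."       # genuine copy
tampered = b"...manipulated bytes..."     # altered copy

print(fingerprint(received) == registry["press-release.mp4"])  # True
print(fingerprint(tampered) == registry["press-release.mp4"])  # False
```

Even this simple check would flag a manipulated file, provided the recipient trusts the registry; the harder institutional question is who maintains that trusted record, which is where blockchain-based provenance proposals come in.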

We have already started acting on projects involving the use of AI in aggregation tools and the AI replacement of primary talent in existing television commercials, for example. As the use of deepfakes looks set to continue to grow, it is important to take proactive steps to safeguard against their misuse. 

[1] United States Department of Homeland Security, 2023.