News
Aug 21, 2024 by Foresight
Criminalising Deepfakes and The Evolution Of AI Regulation In Criminal Law
Whilst the rise of artificial intelligence (AI) has brought significant advancements across various sectors, it has also introduced new challenges, particularly in the realm of criminal law. One of the most pressing issues is the use of deepfakes - AI-generated media that can convincingly mimic real people, often with malicious intent.
The UK's recent legislative developments, especially following the Online Safety Act of 2023, highlight the growing recognition of the dangers posed by deepfakes and the steps being taken to criminalise their harmful use.
Here, we outline those steps, beginning with:
The legislative response of the Online Safety Act and beyond
The Online Safety Act 2023 marked a significant step in addressing the dangers posed by deepfakes, particularly those of a sexually explicit nature.
Effective from January 31st 2024, the Act introduced an offence of sharing intimate images, including deepfakes, without consent. The legislation was a response to growing public concern about the misuse of AI in creating realistic yet fabricated content, often used to harass or humiliate individuals.
However, the issue of creating deepfakes remained unaddressed until the UK Government proposed an amendment to the Criminal Justice Bill in April 2024. This proposal aimed to criminalise the creation of sexually explicit deepfakes - a move that underscored the government’s commitment to protecting individuals from the psychological harm that these AI-generated images can cause. And although the prorogation of Parliament on May 24th 2024 temporarily halted the progress of this Bill, the existing offence of sharing deepfakes remains in force, signalling a strong stance against this form of digital abuse.
The growing threat of deepfakes
Whilst the technology was initially celebrated for its creative applications in entertainment and advertising, deepfakes have rapidly evolved from a novel technological curiosity into a tool for potential harm.
For example, where brands have used deepfakes to create engaging and personalised content for consumers, there is a darker side to this technology too, particularly with the rise of deepfake pornography and disinformation campaigns.
The impact of deepfakes on individuals can be devastating. The release of deepfake pornographic images of public figures, such as the widely reported case involving singer Taylor Swift in January 2024, highlighted the severe emotional and reputational damage that can result from such abuse. In response to these incidents, there has been a public outcry for more robust legal protections against the creation and dissemination of deepfakes.
The UK’s regulatory approach
The UK Government’s decision to criminalise the creation of sexually explicit deepfakes reflects a broader trend in its approach to regulating AI and online safety, with the proposed offence under the Criminal Justice Bill targeting individuals who create or design intimate images of another person using digital technology with the intent to cause harm. Importantly, this offence does not require the image to be shared for it to be considered a crime, addressing the significant harm that can occur even when such images remain private.
This legislative move is also consistent with the UK’s technology-neutral approach to AI regulation, as outlined in the Government’s AI White Paper. Here, we see that rather than creating AI-specific laws that may quickly become outdated, the UK has opted to integrate AI regulation into existing legal frameworks, taking a strategy which aims to provide flexibility and ensure that laws remain applicable as AI technology evolves.
International perspectives on deepfake regulation
The UK is not alone in grappling with the challenges posed by deepfakes, as the European Union, through its AI Act adopted in March 2024, has also taken steps to regulate the use of AI-generated content.
The EU’s approach includes mandatory disclosure requirements for users of AI systems that generate deepfakes, ensuring that such content is clearly labelled as artificially created. And whilst the EU’s measures stop short of banning deepfakes outright, they represent a significant step towards transparency and accountability in the use of AI.
In contrast, the United States has yet to implement comprehensive federal legislation specifically targeting deepfakes. However, ongoing legislative efforts, such as the Preventing Deepfakes of Intimate Images Act and the No FAKES Act, which aim to address the issue at a federal level, indicate a growing recognition of the need for legal frameworks to combat the misuse of AI-generated content.
What’s more, China has taken a much more direct approach, enacting regulations that specifically target deepfakes following several high-profile scandals. The removal of the viral deepfake app ZAO from app stores is one example of China’s swift action in response to concerns about the potential for abuse of this technology.
Challenges and opportunities
As AI continues to develop, the challenges associated with deepfakes are, unfortunately, likely to grow.
In anticipation of this, the UK Government’s commitment to monitoring and updating its legal frameworks, as evidenced by the ongoing discussions surrounding the Criminal Justice Bill, is a positive step towards addressing these challenges. However, the effectiveness of these measures will depend on their implementation and enforcement.
The UK’s approach to regulating deepfakes, and AI more broadly, is also likely to evolve in response to technological advancements and changes in government. For example, the general election in July 2024 saw Labour come to power and Sir Keir Starmer take the helm as Prime Minister, and this change in leadership could bring shifts in policy. Ongoing dialogue between the UK and international partners, such as the EU and the US, will also play a crucial role in shaping the future of AI regulation.
Balancing innovation and protection
The rise of deepfakes presents a complex challenge for lawmakers, as on one hand, AI technologies offer immense potential for innovation and creativity, whereas on the other, they pose significant risks to privacy, reputation and public trust.
The UK’s recent legislative developments, particularly the move to criminalise the creation of sexually explicit deepfakes, reflect a growing recognition of the need to balance these competing interests.
As the challenges of AI regulation continue to evolve, it will be essential to ensure that legal frameworks remain flexible and responsive to new threats. By doing so, governments can protect individuals from the harms of deepfakes while still fostering an environment where AI can be used responsibly and ethically.
The road ahead may be challenging, but with careful planning and international cooperation, it is possible to harness the benefits of AI while mitigating its risks, and we’re proud to say that Foresight are here to help.
About us
Foresight brings the UK’s most comprehensive panel of expert witnesses to support immigration, family and criminal law cases. As an industry-leading provider, we help legal professionals save time and work smarter by sourcing the most suitably qualified and highly experienced expert to support their case within LAA rates, if required - no matter the discipline, no matter the deadline.
If you would like to talk to our team or to find out more about our services, reach out today.