3 March 2024

Weekly Briefing: The deepfake debacle

By Matthew Feeney

This year will be defined by elections. Elections in the US, India, Brazil, Mexico, Indonesia, Bangladesh, Germany, the UK, Spain, South Korea, and many other democracies around the world – involving an estimated 2 billion eligible voters.

And in all of those democracies, people are getting worried about deepfakes. Home Secretary James Cleverly recently warned that criminals and foreign adversaries could use such content to interfere in this year’s General Election, and he’s not the only one.

Yet while the potential for such interference is a serious concern, lawmakers should be hesitant to allow such concerns to motivate an aggressive crackdown on deepfake content.

Deepfake content is created using Artificial Intelligence (AI) technologies such as Generative Adversarial Networks (GANs) and autoencoders. A GAN pits a generator against a discriminator in a feedback loop of offence and defence: the generator produces fakes, the discriminator learns to spot them, and each improves in response to the other. Autoencoders are neural networks tasked with compressing and reconstructing data. These technologies can be used to create realistic but fake pieces of video or audio content.
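For readers curious what that offence-and-defence loop actually looks like, here is a minimal sketch in Python using PyTorch. The toolkit, the toy networks, and the target distribution are all illustrative assumptions of ours, not a description of any real deepfake system:

```python
# A minimal GAN sketch: a generator learns to mimic samples from a
# simple Gaussian distribution while a discriminator learns to tell
# the generator's output apart from real samples.
# Illustrative only; real deepfake systems are vastly larger.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a candidate 'real-looking' sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real (0 to 1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 'Real' data: samples drawn from a Gaussian centred on 4.
    real = 4 + 1.25 * torch.randn(64, 1)
    fake = G(torch.randn(64, 8))

    # Defence: the discriminator learns to separate real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Offence: the generator learns to fool the updated discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generator's output mean should drift towards the real mean of 4.
print(G(torch.randn(1000, 8)).mean().item())
```

Production deepfake generators replace these toy networks with large convolutional models trained on images or audio, but the adversarial structure – each side sharpening the other – is the same.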

Such content can be used for comedic effect or to conceal the identities of persecuted minorities who appear in documentaries. At least one politician has used deepfake technology to reach a wider audience by altering a political campaign video to show him speaking English rather than Haryanvi.

Unfortunately, the educational, artistic and documentary uses of deepfakes are often overshadowed by more nefarious uses, which account for much of the deepfake content in circulation – including non-consensual pornography, politically motivated disinformation, and scams. Many victims of deepfake pornography have reported the horrific effects of the spread of the content, including the onset of severe depression, anxiety, and suicidal thoughts.

The American presidential election has already served as a venue for deepfake disinformation, with voters in New Hampshire receiving fake phone calls supposedly from President Joe Biden urging them not to vote in the Democratic primary and to ‘save’ their vote for the presidential election. Only a few weeks ago, a financial professional wired $25m to fraudsters who had used deepfake technology to make it appear as if he was on a video call with colleagues.

Lawmakers around the world have taken steps to tackle these harms. In the UK, the Online Safety Act made sending deepfake pornography a criminal offence. In the US, a number of states have passed bans on sharing deepfake pornographic content, even though they may run afoul of the First Amendment.

In China, broad restrictions on deepfakes requiring disclosure of altered media extend to deepfake pornography.

Many are persuaded that restrictions on deepfake pornography are justified on the grounds that the content contributes little to important debate, humiliates innocent people, and harms reputations. However, restrictions on deepfake content that portrays politicians rest on shakier ground.

Ridiculing politicians and other powerful figures is one of the most protected categories of speech. Even in the UK, hardly the bastion of free speech many claim it is, those seeking to criticise politicians enjoy broad protections.

Broadly speaking, approaches to deepfake regulation fit into two buckets: one that targets specific uses of deepfake technology, and another that takes aim at the technology itself. Both approaches carry costs and benefits, but on examination the former is preferable to the latter.

Targeting the use of technology rather than the technology itself is the approach taken by governments all over the world when it comes to speech regulation. That criminals can use books, television, phones, radio, and online platforms to commit crimes does not warrant a ban on these methods of delivering content. Rather, governments define categories of speech that are prohibited.

Yet there are some advocates who are keen to ban the use of deepfakes writ large. For example, the Control AI campaign seeks to ‘Ban deepfakes’. One response to such a suggestion is to ask, ‘What about using deepfakes for satirical purposes? What about actors licensing their images for deepfake content?’ Control AI answers these kinds of questions by defining deepfakes so as to exclude such valuable uses.

But a definition of deepfakes that is designed to include only particular uses is likely to cause legal and regulatory confusion. While the Online Safety Act provides precedent for defining deepfakes and non-consensual pornography, drafting a law that can capture deepfake content that ‘undermines democracy’ (which Control AI includes in its own definition of deepfakes) without enveloping valuable political speech will be a challenge.

Remember: countries like Russia will not be deterred from deploying deepfakes during the General Election by threats of criminal penalties. This fact may motivate some people to argue that we should therefore punish online services that host such content. But that will likely result in social media companies such as Meta, YouTube, and TikTok embracing false positives as they race to remove disinformation.

Given the massive scale of user-generated content uploaded to popular platforms, it is inevitable that false positives will occur. Some might argue that this is a cost worth incurring. But there is already a range of tools available to detect deepfakes, and we should expect these tools to improve as deepfakes become more widespread.

The history of communications technology is full of examples of concerns about new technology turning us into gullible consumers ready to believe anything we hear, see or read. As the private sector and government continue to grapple with the spread of deepfakes, we should be mindful of the potential unintended consequences of regulation, and the risk that legitimate speech could also be targeted. But I suspect that won’t hold politicians back when – not if – the first election-related deepfakes go viral.



Matthew Feeney is Head of Tech at the Centre for Policy Studies.