17 June 2024

The deepfake general election


We are in the midst of the UK’s first deepfake general election. Although the campaign is only a few weeks old, you may already have seen a number of deepfakes (realistic video and audio content generated via deep-learning AI techniques) portraying candidates in an unfavourable or humorous light.

While deepfakes no doubt have valuable artistic and satirical applications, their use by criminals and abusers has attracted the attention of lawmakers all over the world.

Lawmakers can act to ensure that the government is better informed about deepfake threats, but in a new paper published this week, we argue that they should resist the urge to address deepfake harms via sweeping legislation aimed at a content-creation technology rather than at the content itself.

Unfortunately, the majority of deepfake content is harmful. Such content includes fake “revenge pornography” as well as material used by criminals to facilitate blackmail and fraud. The costs of these harms include not only financial losses (e.g. fraudsters using deepfakes to set up a fake video call and convince an architecture firm employee to wire them $25.6m) but also the deterioration of mental health among victims of deepfake ‘sextortion’ schemes. That is before we get to the political deepfakes and other AI-generated content that keep cropping up during the general election campaign, and will only become more common.

Some of this content is harmless, such as a satirical video portraying Rishi Sunak outlining troop deployments in the computer game Fortnite, lampooning his proposals for National Service. Or a satirical TikTok showing Sunak saying that he could not care less ‘about energy bills being over £3,000’. But some of it is far less benign. There was the video purporting to show Wes Streeting, the Shadow Health Secretary, calling Diane Abbott a ‘silly woman’ during a Politics Live appearance. And a video that appeared to show Labour North Durham candidate Luke Akehurst – a particular hate figure for the Corbynites – using crude language to mock constituents.

This kind of content is unlikely to swing an election, but it does undermine the process. It is not hard to imagine a sufficiently motivated criminal or foreign adversary seeking to spread election disinformation, undermine faith in democratic institutions or exploit political fractures in our society.


On top of which, trust in institutions in Britain is already on the rocks. According to a survey conducted by King’s College London, only 13% of people in the UK had a great deal or quite a lot of confidence in the press, the second-lowest level of the 24 countries in the report. A majority of British people do not trust political parties and almost half do not trust Parliament. Such a society is ripe for election interference.

In our new report, Facing Fakes, we outline why legislation need not tackle the technology itself in order to address deepfake harms. When it comes to some of the most worrying harms – such as the criminal uses of deepfakes – there is already a robust body of law to turn to. Blackmail, fraud, revenge pornography and the spread of election disinformation are already illegal.

If existing law has not kept pace with the emergence of deepfakes, then politicians should propose targeted amendments rather than turn to sweeping deepfake transparency requirements, such as those in the EU’s AI Act, which risk imposing costs on businesses that create deepfakes or allow users to create them, while entrenching market incumbents.

The UK is already home to the AI Safety Institute. We argue that it would be an ideal venue for discussions about the state of current law, and that it could also house a deepfake taskforce made up of industry experts as well as representatives from the Home Office, the intelligence community, and Ofcom, which since the passage of the Online Safety Act has become the key social media regulator. Such a taskforce would keep the British government up to date on deepfake threats, detection methods and emerging risks.

Deepfakes are here to stay. The global nature of the internet and deepfake technology should make us hesitant to expect that legislation will be able to make a significant difference to the spread of harmful deepfake content. That doesn’t mean that government should be idle – just that it should approach this new technology, and any new technology, with an open mind.


Matthew Feeney is Head of Tech and Innovation at the Centre for Policy Studies.