Should making fun of politicians be illegal? According to some legislators, it should be.
Elon Musk recently shared a video featuring the voice of presumptive US Democratic presidential candidate and current Vice President Kamala Harris. In the video, Harris can be heard claiming (among other things) that she was a ‘diversity hire’ pick for the nomination. While Harris’ voice sounds authentic, it is a product of an AI technique often described as a ‘deepfake’.
That Musk shared the video with his almost 200 million followers on X (formerly Twitter) is raising questions about how governments should regulate deepfake content. California Governor Gavin Newsom has already said that the kind of AI voice manipulation in ads on display in the Harris video should be illegal, and that he plans to sign legislation to that effect in the coming weeks.
While the video Musk shared is making headlines, it is worth noting that it does not represent the worst kind of political deepfake content. The video is obviously a parody. No serious person thinks that Harris would say what she is portrayed as saying in the video. The goal of the video is comedy rather than the spread of misinformation or voter suppression.
In liberal democracies, free speech laws generally protect citizens who parody or otherwise make fun of politicians. Indeed, the California law that is already on the books makes an explicit exemption for ‘audio or visual media that constitute satire or parody’. The law requires that any ‘materially deceptive’ deepfake featuring a candidate 60 days before an election include a disclosure informing viewers that the content is fake.
In the UK, there is no election-specific deepfake law. But, as I noted in a Centre for Policy Studies paper released earlier this year, there are already laws in effect concerning the integrity of elections that would cover the use of deepfakes to deceive voters. Nonetheless, as videos similar to the one shared by Musk proliferate, we should expect lawmakers to call for legislation in response.
But these lawmakers should proceed with caution.
Anyone seeking to write laws designed to prevent deepfake election misinformation while also allowing for political parody will find very quickly that one person’s satire is another’s nefarious misinformation campaign. The creation of a grey area between ‘parody’ and ‘election misinformation’ will have an undesirable effect: it will stifle valuable political commentary and parody as content creators err on the side of caution to stay on the right side of legislation. In addition, it is unlikely to hamper efforts by well-resourced and motivated actors (such as foreign adversaries) who do not have to fear the reach of British justice.
In the US, there is a good argument to be made that laws such as California’s run afoul of the First Amendment. Over here, legislators do not face that barrier. If they really wanted to, MPs could look to amendments to the Online Safety Act as one option to tackle deepfake election misinformation.
The Online Safety Act already addresses the use of deepfakes in some instances, but it does not tackle political deepfakes. The law criminalises the non-consensual sharing of intimate AI-generated content. The previous government, not content with the mammoth Online Safety Act, proposed adding a new deepfake offence into the Criminal Justice Bill, which did not pass before this year’s general election. The new offence would have criminalised creating sexually explicit deepfake content, regardless of whether the content was ever shared or sent.
Legislation might be tempting to lawmakers facing online threats, but they should consider that the emergence of deepfakes, like the arrival of every other new technology, will be accompanied by an awkward period in which institutions struggle with how best to tackle inauthentic content.
Photography, filmmaking, radio, photo editing software and the internet have all given rise to social anxiety, moral panic and overreaction. History doesn’t repeat itself, but it often rhymes, and lawmakers should be aware that we are in the midst of a period of concern very similar to those we have seen before.
Deepfake content is too young for us to gauge what its long-term impacts on elections will be. But we should consider that one effect might be that the spread of deepfake content makes the average voter more sceptical about online content. Another effect that we are already seeing is institutions such as newspapers, AI labs and governments using technology to identify deepfake content.
Deepfake detection methods are not perfect, and there will always be gullible people out there. But these shortcomings shouldn’t cloud our judgement. The fact that we have navigated previous eras of technological advancement without democracies falling apart should provide some reassurance. While it might not be in their interest, MPs must resist the urge to over-regulate and jeopardise one of the most important freedoms we enjoy: making fun of politicians.