30 October 2023

The AI fraudsters are coming – we need to act now

By Richard Hyde

This week the UK is hosting a global summit on AI safety, with many international leaders and executives in attendance. The potential existential threat posed by unconstrained artificial intelligence has certainly garnered a lot of attention since the emergence of ChatGPT in late 2022 and the publication of a warning letter by hundreds of AI company executives, researchers and scientists.  

Less covered, but important nonetheless, is the risk of AI-enabled crime, with the potential for more powerful tools falling into the hands of criminals. This prospect is particularly worrying because crimes such as fraud are already being committed on an epidemic scale, with relative impunity. Introduce AI into the mix, and the outlook darkens considerably.

The FBI is already warning of the deployment of AI by organised crime to generate ‘synthetic content’ such as deepfakes for ‘spear phishing’, where fraudulent emails or texts are sent from ostensibly trusted sources, or for ‘social engineering’ to exploit people’s trust. Through the dark web, AI tools built for criminality, including one called FraudGPT, are beginning to be made available to fraudsters.

There are recent indications that government is beginning to take note of these AI-related crime challenges. However, to lead the effort against future AI-enabled fraud, the UK must first ensure its response to the current fraud emergency is effective, and make the country the most hostile environment for fraud in the world. To date, the UK’s approach is widely considered inadequate. And while the Government’s recent fraud strategy offers some useful steps forward, its ambitions and proposals fall far short of what is needed to substantially reduce the present threat.

Without the foundations of a successful counter-fraud approach against the large-scale fraud being perpetrated now, politicians, policymakers, law enforcement, regulators and the private sector are unlikely to be able to mount an effective response to AI-enabled fraud. Ultimately, the risk is that the UK ends up repeating the mistakes that allowed the current fraud emergency to develop, with insufficient action coming too late in the day.

To outline how the UK might do better, we at the Social Market Foundation [SMF] recently published a report setting out the key elements of a more ambitious counter-fraud effort.

The report summarises the discussions at two expert roundtables [the first of which was co-convened by SMF and Stop Scams UK] where we heard from senior politicians and regulators, representatives from the financial, telecoms and technology industries, as well as consumer and business groups. 

The good news is that there is substantial consensus about what needs to be done to beat fraud. 

First, we need to focus on prevention and get financial services, technology and telecoms companies to buy into a proactive approach towards fraud. Second, we need better public education about fraud so that people are better prepared to spot it and take steps to reduce their exposure. 

Third, we need to build a better intelligence infrastructure so the right actors can take action at the right time. This would require enhanced and accelerated data sharing across the organisations in the fraud chain and between the public and private sectors. 

A vital prerequisite for such steps, however, is sufficient focus and resolve to prioritise fraud, and for all the relevant parties across the public and private sectors to set aside their differences and cooperate. This collaborative spirit has been sorely lacking until now. If there were such determination at the top of politics and from the boardroom downwards – crucially underpinned by adequate resourcing – experts are confident that the UK could get on top of the current problem. That, in turn, would put us in a position to face the forthcoming AI-enabled fraud challenge with confidence and a good chance of success.

Richard Hyde is a Senior Researcher at the Social Market Foundation

Columns are the author's own opinion and do not necessarily reflect the views of CapX.