3 November 2023

Policymakers must walk the AI tightrope between safety and innovation

By Mimi Yates

The Prime Minister is hoping that history will once again be made at Bletchley Park, the site of the inaugural AI Safety Summit. In an age of global insecurity, it is welcome to see world leaders and industry experts gather to promote international collaboration and safety measures for our futures.

Michelle Donelan, Secretary of State for Science, Innovation and Technology (DSIT), referred to ChatGPT as a ‘Sputnik’ moment for humanity, one in which we are seeing accelerating investment in frontier AI systems. Frontier AI mainly describes Large Language Models (LLMs) such as ChatGPT, whose recent debut has sparked an urgent drive to tackle the complexities of this new technology. The government is absolutely right to focus on frontier AI – but doing so may leave us at risk of paying insufficient attention to ‘narrow’ AI. Narrow AI, as the name suggests, focuses on a specific task, and could revolutionise healthcare and productivity – rendering poverty and many terminal diseases conditions of the past.

Delegates at the Summit have been tasked with navigating a fine line between premature, restrictive regulations that could curb innovation, and the necessity to establish ethical safeguards. Crafting policies that prioritise both safety and innovation is a delicate balancing act. Emphasising the former could mean that we risk missing out on some of the extraordinary benefits that AI could bring to our everyday lives. However, putting the latter before the former could carry existential safety implications.

The creation of the AI Safety Institute and the newly announced Bletchley Declaration are a good start. Although the Bletchley Declaration falls short of offering concrete policy measures, this is understandable: frontier AI’s applications are still largely unforeseen, and any agreement must accommodate nations with differing regulatory interests and legal frameworks. And while it is right that world leaders are taking safety so seriously, dwelling aimlessly on the risks of a Blade Runner-like dystopia post-summit might stall immediate progress.

But perhaps the biggest win is the investment in AI skills. Ahead of the summit, DSIT announced a series of scholarships and funding for new Centres for Doctoral Training in AI, as well as a new visa scheme to encourage students to take on AI-specific courses. As well as increasing the number of people working directly on advancing frontier AI, this will help to train up the top AI safety researchers of the future.

The government should capitalise on the positive steps it has made over the summit – ideally through a multilateral International Agency for Artificial Intelligence, as suggested by the Adam Smith Institute. Within this, the UK can take on a leading role, drawing not only on AI experts, but also on lawmakers, politicians and people with a deeper understanding of AI ethics and morality.

We must also take a step back and examine the state of other sectors, like housing. To make the UK more attractive to incoming talent, planning reform must be part of the conversation. Sky-high house prices are not attractive to would-be AI researchers – nor is the lack of lab space.

Fundamentally, the Summit has shown an emerging international consensus that AI is a huge governance and policy challenge. Recognising this is a start, but there is still much to do if we want to unleash the UK’s potential to become an AI superpower.



Mimi Yates is Director of Engagement and Operations at the Adam Smith Institute

Columns are the author's own opinion and do not necessarily reflect the views of CapX.