5 November 2023

Weekly Briefing: A busy week for AI


It has been a busy week in the world of Artificial Intelligence (AI). American President Joe Biden signed an AI executive order (EO), and the British government’s long-awaited AI Safety Summit kicked off at Bletchley Park. The long-term effects of the EO and the summit remain to be seen, but both provide an insight into how some of the most powerful and influential governments are thinking about a potentially civilisation-changing technology.

President Biden’s 100+ page EO is ambitious – its headline goals are to protect privacy, set new standards for AI safety, boost innovation, safeguard civil rights, and improve government use of AI. To achieve these goals, the EO directs a range of federal agencies to complete a number of tasks. ‘Agencies that fund life-science projects’ will establish standards to mitigate the threat of AI being used to engineer dangerous biological material. The Departments of Energy and Homeland Security will ‘address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks’. And the National Institute of Standards and Technology will develop ‘rigorous standards for extensive red-team testing to ensure safety before public release’.

In addition, the EO requires federal agencies to write reports on the effect of AI on the labour market and asks Congress to pass data privacy legislation. It also includes a call to make high-skilled immigration easier.

As R Street Institute Resident Senior Fellow Adam Thierer put it: ‘Taken together with other recent administration statements, the EO represents a potential sea change in the nation’s approach to digital technology markets’. Thierer went on to warn that the EO’s many requests and mandates increase the risk of ‘death by a thousand cuts’ for AI policy.

On this side of the Atlantic, the government has wrapped up its AI Safety Summit, which featured representatives from 27 foreign countries as well as policy experts from dozens of companies, think tanks, and academic institutions. 

Among the highlights of the summit was the Bletchley Declaration signed by 28 countries and the European Union. The declaration affirms that anyone working on highly capable AI systems has a responsibility to ensure their safety. Signatories went on to commit ‘to support an internationally inclusive network of scientific research on frontier AI safety […] to facilitate the provision of the best science available for policy making and the public good’. 

Towards the end of the summit, Prime Minister Rishi Sunak announced that Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. had agreed to test AI models from eight companies (Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI) before deployment.

Sunak also revealed that governments from the attending countries had agreed to set up an advisory panel for AI risks modelled on the Intergovernmental Panel on Climate Change. The AI Safety Summit also looks set to be the first of at least three: South Korea and France will host the next two, with the next taking place in six months.

While many of these steps are welcome, as Mimi Yates argued in CapX this week, it is vital that governments strike a balance between safety and innovation when it comes to the regulation of this revolutionary technology.

Click here to subscribe to our daily briefing – the best pieces from CapX and across the web.

CapX depends on the generosity of its readers. If you value what we do, please consider making a donation.

Matthew Feeney is Head of Tech at the Centre for Policy Studies.