25 October 2024

Politicians are blocking the UK’s AI potential

By Matthew Feeney

Economic growth has proven to be stubbornly elusive in the UK since the 2007/08 financial crisis. The average UK year-on-year GDP growth from 1980 to 2006 was about 2.5%. From 2007 to 2023, UK year-on-year GDP growth averaged less than half that, at under 1.2%. Our productivity gains have likewise been woeful. From 1980 to 2006, the average annual percentage change in output per hour worked was about 2.3%. The figure for 2007 to 2023 is not quite 0.5%. There is no way to sugarcoat the current situation: UK productivity and economic growth have done little better than flatline for almost two decades.

Amid this gloomy state of affairs, Artificial Intelligence (AI) offers some hope. The UK punches well above its weight when it comes to AI research and academic achievement, and there is a wide range of AI innovation and research that could boost economic growth. At a time when the new Government is looking for economic growth, these are welcome comparative advantages.

Yet despite these advantages, successive UK governments have failed to adopt policies that put the UK in the best position as an AI leader. The UK is in an enviable position when it comes to establishing innovation-friendly regulation. Brexit is complete, and the UK is free to craft a new regulatory regime free from European Union influence. But we have yet to seize this huge post-Brexit opportunity.

Indeed, for all of the previous government’s emphasis on AI, there was relatively little in legislation and regulation that changed the rules governing AI, or addressed the concerns many people have about AI. Instead, the last government published an AI whitepaper, launched the AI Safety Institute, held the first multinational AI Safety Summit, launched a plan to introduce ‘AI red boxes’ and announced a host of investments in AI.

Meanwhile, elsewhere in the world, countries and international blocs were passing AI legislation and regulation. Much of it, such as the EU’s AI Act, was misguided for many reasons. With such flawed legislation being implemented abroad, it is a shame that the UK did not seize on the opportunity to implement pro-innovation, pro-market legislation and regulation. 

AI safety is important, and there is nothing wrong with the UK being seen as a world leader in the area. But it is not enough, and it may have been a diversion from the need to build a regulatory regime that sets us ahead of the rest of the world. Busy with Sunak’s AI Safety Summit, the previous government did not rush to establish a post-Brexit AI regulatory scheme. In fact, much of the previous government’s broader approach to technology, as seen in both the Online Safety Act and the Digital Markets, Competition and Consumers Act, seemed motivated primarily by a desire to bully American companies and to make digital innovation more costly and risky.

The Sunak government was correct to avoid broad AI legislation, noting: ‘Introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI in line with the adaptable approach set out in the AI regulation white paper’. This acknowledgement of the pacing problem was welcome. AI is too broad and fast-changing for a body of legislation to be effective without being burdensome. Nonetheless, as I noted in the Centre for Policy Studies paper ‘Regulating for Growth’, there is a way for governments to implement a pro-innovation and pro-growth regulatory agenda: by implementing narrow and well-defined safety standards rather than technology-specific requirements. Such an approach would have allowed for a wide range of industries to use AI tools without having to rely on compliance with a top-down AI law.

Instead of taking this approach, previous Conservative governments failed to meet even their own goals and deadlines on AI-related innovation. Take driverless cars, for example. As I pointed out in ‘Regulating for Growth’: ‘In 2017, Philip Hammond announced that the UK would have fully driverless cars by 2021. In the Autumn Budget, the then Chancellor pledged that the Government would create “the most advanced regulatory framework for driverless cars in the world” and stated that it wanted “to see fully self-driving cars, without a human operator, on UK roads by 2021”. That year came and went without an advanced regulatory framework in place and without driverless cars on the road. The current government’s plan is to have an autonomous vehicle safety framework in place in 2025.’ As things stand, we’ll be lucky to see autonomous cars on our roads in 2026.

While large language models (LLMs) such as ChatGPT have been attracting many of the headlines associated with AI in the past few years, AI encompasses much more than chatbots and search engines. Too many of the policy discussions about AI seem to inevitably descend into debates about LLMs, artificial general intelligence (AGI) or safety issues. These are each worth discussing, but not at the expense of delivering the practical rules the wider sector needs. Many investors, entrepreneurs and researchers are simply seeking an AI regulatory framework that provides clear guidelines. Unfortunately, the previous government left office without any such guidelines in place.

The new Labour Government has developed its own AI strategy, in place of the previous government’s emphasis on AI safety. However, it is not clear that the new Labour strategy will yield the UK AI revolution – and the accompanying boost to UK growth – that so many want. One of the Government’s first legislative goals, announced in the King’s Speech, is an AI Bill. Months later, the consultation for the Bill has yet to launch.

The AI Bill is only part of Labour’s AI strategy, but the other measures are similarly underwhelming. It has cut some commitments made by the previous government, such as investment in a University of Edinburgh supercomputer, and is reportedly considering scrapping a San Francisco office for the AI Safety Institute. Instead, its focus appears to be the implementation of AI into public services, with the Government hoping to inject more AI tools into the public sector in order to boost public sector efficiency.

Using AI to boost public sector efficiency is not a new idea, however. Under the last government, the then minister Alex Burghart announced plans to use AI to improve Whitehall’s efficiency.

It is also worth keeping in mind that the public sector has a poor track record when it comes to technology-fuelled efficiency. According to the Office for National Statistics, between 1997 and 2019 public sector productivity grew at an average rate of 0.2% a year. That the widespread use of email, the emergence of the smartphone and the development of troves of software designed to make work easier have not improved public sector productivity should adjust our expectations surrounding AI. 

The Government has launched two big-picture initiatives. First, the AI Opportunities Action Plan, led by Matt Clifford, co-founder and chair of Entrepreneur First and chair of the Advanced Research and Invention Agency. This will propose ways for the UK to seize the opportunities AI offers to boost economic growth and productivity. The second new initiative is the Regulatory Innovation Office (RIO), which is intended to fast-track new and emerging technologies to market. One of its stated goals is to help implement AI innovations in the NHS, which fits with the Government’s broader strategy of using AI to help make the public sector more efficient.

It is too soon to tell whether the Government’s AI Opportunities Action Plan or the RIO will yield the benefits so many are hoping for, but the history of similar initiatives in the past should temper our enthusiasm. Both stop short of the pro-innovation regulatory reform the entire sector needs.

While the strategies of the current and previous government leave much to be desired, policymakers can take comfort from the fact the UK remains a global leader in AI research. As the global race for AI leadership intensifies in the coming years, the UK will be well-placed to establish itself as a world leader in AI legislation, regulation and investment.

It is not as if the UK is starting this race flat-footed. This year, the Nobel Foundation awarded the chemistry and physics Nobel prizes to five researchers who made pioneering contributions in AI, three of whom were British. As you might expect, the winners each have a few degrees from excellent universities. They have 12 degrees between them, five of them awarded by three world-class UK universities (the University of Edinburgh, Cambridge University, and University College London).

Also of note is that half of the chemistry Nobel was awarded to two employees of a London-based AI lab, Google DeepMind, which has been at the forefront of developing AI tools that can predict the structures of proteins. Thanks to our universities’ academic excellence, the City’s deep capital market and the innovative startups that the UK produces, we are making a name for ourselves as a global AI leader, behind only the USA and China in AI investment.

The UK is in a great position to establish itself as one of the best places in the world to launch and grow AI companies, but our success is not a historical inevitability. If the Government is serious about establishing the UK’s global leadership in AI, the best course of action is to give our entrepreneurs the clarity and the freedom they need to innovate. That means reforming the regulatory state – so that it is technologically neutral and focuses on preventing well-defined and narrowly selected harms.

At a time when Keir Starmer’s Government seems to be struggling to find the economic growth Labour campaigned on, this is an opportunity the UK cannot afford to miss.

You can read ‘Regulating for Growth’ in full here.


Matthew Feeney is Head of Tech and Innovation at the Centre for Policy Studies.