In recent weeks, the endless announcements of new capabilities, new models and new funding have upped the pace of the AI race. The burgeoning AI industry – and Britain’s place in it – is likely to become an increasingly prominent political issue, particularly as and when the technology starts to displace jobs en masse. Labour leader Keir Starmer has planted his flag, promising at London Tech Week to abandon the Government’s approach and bring in stronger regulation.
At the moment much of the debate is about regulation and, above all, risk. A one-line open letter, signed recently by leading AI researchers and CEOs, sets out the stakes clearly: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’.
Improved governance will be crucial to safe and prosperous AI innovation, and the UK is uniquely placed to become a global leader on this front. But to do so, we need to focus on cooperation, not regulation.
Why traditional regulation won’t work
Many instinctively turn to regulation to mitigate both the current and, especially, the future dangers of advanced AI. To get a sense of the flaws in this approach, look no further than the EU. Its strategy, spearheaded by the AI Act, involves broad regulation of both current and future AI systems.
The problem is that predicting what future AI systems will look like (and be capable of doing) is extremely difficult. Crafting general-purpose regulation therefore means settling either for vague wording and hand-wavy promises, or for concrete but inflexible rules and definitions. The former could end up overzealously stifling life-saving innovation and progress. The latter may fail to account for new technological developments and advances. The gravest risk is that an illusion of safety leaves us vulnerable to new paradigms of AI, or new capabilities, that catch us off guard.
The EU’s regulation-heavy approach stands at odds with the UK’s. Innovation is (quite literally) in the name of the UK’s AI regulation White Paper: ‘A pro-innovation approach to AI regulation’. Though it does not explicitly lay out provisions to deal with advanced AI systems and existential risk, the UK’s framework may be the best path forward to do exactly that. This is because it leaves the door open to cooperation, a crucial ingredient in dealing with future challenges.
Why cooperation will work
Collaboration could be our best chance of effectively governing advanced AI models. The field of players at the frontier of AI development is small. Only OpenAI, Google DeepMind and Anthropic (and, if you’re being generous, Meta and Microsoft) are making moves at the top end of capabilities. This concentration of power at the top may raise competition concerns, but from a safety perspective it is unambiguously positive: a limited field of players makes coordination feasible.
Rather than casting a wide regulatory net, we can collaborate with the labs, understand their models’ capabilities, and ensure they adopt adequate safety measures. The nuance and compromise needed to achieve this just aren’t possible with conventional regulation, which makes it vital to establish open communication between government and the labs, and between the labs themselves.
Notably, these labs are keen to collaborate and are seriously invested in finding solutions to potential problems. They may be profit-motivated, but their shareholders aren’t particularly fond of catastrophic AI accidents. Yesterday’s open letter, signed by the CEOs of all three major labs, underscores this commitment. And if you don’t buy their words, just look at their actions: Anthropic, OpenAI and Google DeepMind have all recently announced a raft of governance and technical safety initiatives. Clearly, they take this issue seriously.
Building the table
We could opt for stringent regulation, but doing so could cost us a place at the table with the big AI labs, who have an unmatched understanding of the technology. If a lighter touch guarantees us that seat, I believe it will lead to safer outcomes.
There is also a third option. Rather than leaving our invitation up to chance, why not build the table ourselves?
There are growing calls for an international body to foster cooperation between the AI labs and other stakeholders. The ideal model would be a CERN-like entity for AI: a global institution where the best and brightest minds in the field could collaborate, sharing best practices and pushing forward the capabilities of AI in a safe and secure environment.
Reaching that goal is undoubtedly challenging, but there are smaller, feasible steps we could take today. We could set up coworking spaces for AI safety and capabilities researchers from across the various labs and research institutes. Many excellent governance proposals lie downstream of strong dialogue and trust between the people working at these labs, so it makes sense to start building that trust now. This should be paired with funding commitments for more AI safety researchers. If the Government wants to be a major international player in AI governance, such steps would demonstrate commitment and leadership. Besides, given the budgets earmarked for AI, a few more offices and safety researchers would be a drop in the ocean.
The limitations of cooperation
Cooperation has its limits. Each new player diminishes its power, and as AI continues to improve, the computational cost of building powerful models will fall, drawing in more actors. That is why regulation will eventually become essential for the most powerful AI models. There is a level of computational power that would be too dangerous to put in public hands, and at that point we will need serious conversations about how to limit access to powerful computing.
For now, the public and foreign actors lag behind the labs. As these capabilities proliferate, we should cooperate with the labs, evaluate frontier models’ capabilities, and respond in time. The world has woken up to AI’s existential threat. Let’s do what humanity does best, and work together to solve it.