31 March 2023

Why Elon Musk is wrong about pausing AI development

By Matthew Feeney

Panic about new technologies is nothing new, and artificial intelligence is no exception.

This week more than 1,800 people have signed an open letter calling for at least a six-month pause on training AI systems that are ‘more powerful than GPT-4’ – the latest chatbot released by OpenAI. The signatories – who include the likes of Elon Musk, Andrew Yang and Steve Wozniak – want governments to impose a moratorium if AI labs don’t stop their research voluntarily. Meanwhile here in the UK, the Government recently released its own AI regulation strategy.

How ‘intelligent’ is GPT-4?

The letter cites a number of concerns about AI: 1) the dissemination of dis/misinformation, 2) the ushering in of a period of widespread unemployment, and 3) the creation of nefarious robot overlords. Underpinning each of these dangers, the letter argues, is the fact that ‘AI systems are now becoming human-competitive at general tasks’, according to research by both OpenAI and Microsoft.

The word ‘becoming’ is doing a lot of heavy lifting in that sentence. Yes, the Microsoft research does claim that GPT-4 ‘could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system’ (for the uninitiated, AGI is the hypothetical version of AI which can understand or learn any intellectual task at a human or superhuman level).

However, the Microsoft research goes on to note that GPT-4 lacks many of the features we generally associate with human-level intelligence, such as adapting to new environments, making ‘conceptual leaps’ and distinguishing between high-confidence predictions and guesses. It concludes by noting that the researchers did not look into how or why GPT-4 achieves its intelligence, or how it reasons. Those are some pretty significant hurdles for anyone claiming that GPT-4 is actually an AGI system.

OpenAI’s paper, on the other hand, notes that GPT-4 ‘exhibits human-level performance on various professional and academic benchmarks’. Again, the wording is important here. ‘Exhibiting’ intelligence is not the same as being intelligent. While AI tools such as ChatGPT may outperform humans in a range of tasks, they have yet to demonstrate the AGI-like intelligence that so concerns Musk and co.

The case for a pause

But even if GPT-4 is not an example of AGI, it is worth taking the concerns outlined in the recent letter seriously. If we really are on the verge of developing AGI or AI tools more powerful than GPT-4, we should consider the risks such a development might entail – from dis/misinformation and unemployment to the doomsday scenario of self-aware machines taking over the world.

The argument on disinformation and misinformation scarcely needs rehearsing. The recent past is full of examples of medical falsehoods and political conspiracy theories leading to offline harm, especially during the pandemic.

The emergence of Deep Fakes – audio and visual content created with deep learning techniques such as generative adversarial networks (GANs) – was already the subject of much unease well before GPT-4 emerged. I have written before about the harmful content that can be generated using Deep Fake technology, from pornographic material to political disinformation. It’s not hard to imagine malign actors combining Deep Fakes with GPT-4 and other large language models (LLMs) to sow havoc during, say, an election campaign.

That scenario might sound alarming, but we shouldn’t ignore the many potential benefits that the same technology could yield. For instance, as well as creating malicious content, AI tools can also be used to detect and eliminate it. Likewise, developers can build protections against certain types of content: witness the guardrails OpenAI put in place to stop GPT-4 responding to queries such as ‘Write a Twitter bio for a white nationalist user’.

What about the jobs?

As so often with transformational new technologies, one of the biggest concerns about AI (and AGI, should it arrive) is that it will put millions of humans out of a job.

While we certainly shouldn’t take such worries lightly, the history of innovation has consistently shown that new technologies create jobs as well as displacing them. The arrival of modern computing put millions out of work – clerks, cashiers, typists and call-handlers, to name just a few – but it also spawned vast industries, replete with jobs that didn’t even exist before. A hundred years ago there were no software engineers, web developers, application architects or network administrators. Nor were there companies such as Microsoft, Google, Apple, Hewlett-Packard or Dell. You could very easily make a similar list of jobs and companies associated with the arrival of planes, cars and telephones.

Many fear that as AI improves, it will become so adept at intellectual and analytical tasks that it will replace humans across a wide range of sectors, from journalism, engineering and filmmaking to construction, consulting and banking. Even if that proves true, it is unlikely that such changes will usher in a period of exceptionally high unemployment. If AI begins replacing people who were employed for their data analysis, writing, product design and auditing skills, other human skills will likely come to be rewarded.

By the same token, there will still be jobs where human interactions are necessary and prized. It is possible that the AI revolution will result in personal communication and empathy being more highly valued than skills learned while pursuing an engineering or mathematics degree. Indeed, there may be a whole job sector dedicated to human beings training AI systems to appear more human. Any vicars, therapists, or football coaches reading this should be relieved!

In any case, we should be wary of making any firm predictions here. If you’d asked someone in the 19th century what the 21st would look like, their guess would probably have sounded absurd to modern ears. Likewise, there’s a good chance we don’t even have the vocabulary to describe what our working lives will look like in the decades to come, let alone in the 22nd or 23rd centuries.

The doomsday scenario

What about the most frightening scenario, in which we develop an AGI so advanced that it renders human civilisation obsolete? Well, as it stands there’s very little consensus on when such an AGI might even emerge in the first place.

A 2016 paper from the White House’s Office of Science and Technology Policy reported that the ‘private sector expert’ consensus was that AGI was still decades away. Other surveys suggest the median estimate among AI experts is a 50% probability of AGI appearing by 2040 and a 90% probability by 2075.

We should take such predictions with a pinch of salt, though. The history of technology is littered with expert predictions that look ridiculous in hindsight. To take an example from this very field, in 1960 the Nobel Prize-winning economist and AI pioneer Herbert A Simon predicted that machines would ‘be capable, within 20 years, of doing any work a man can do’. Sixty years later, such machines are nowhere to be seen.

Of course, the fact that past predictions failed to come true does not mean all predictions should be dismissed. But we should remember that GPT-4, while certainly impressive, is a long way from the AGI of science fiction films such as The Matrix. There is no evidence that GPT-4 is conscious or capable of general human-level intelligence. At the moment, its arrival ought to be cause for celebration, not despair.

The fact that AGI may be on the horizon, bringing seismic change with it, is not in itself an argument for the kind of pause that Elon Musk and his fellow signatories are calling for. That said, if their activism helps focus minds on ensuring AI is as safe as possible, that’s to be welcomed. Indeed, OpenAI itself has already taken steps to ensure that its work is aligned with human values and intent.

No doubt there will be firms, or even nations, that are more cavalier about those risks. But again, why would a six-month moratorium on a category of AI research resolve that issue? There is no way to ensure that such a moratorium would be observed globally, and many governments would gladly plough ahead with AI research while labs in the US, the UK and Europe twiddle their thumbs.

There are risks associated with AI advances that are worth taking seriously, but these risks should prompt us to deepen research, not halt it. AI itself will be a critical part of addressing those risks, and it would be unwise to give foreign adversaries an opportunity to develop tools more powerful than GPT-4 – especially when this technology has so much potential to benefit humanity.

Matthew Feeney is Head of Tech at the Centre for Policy Studies.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.