11 January 2019

Is the AI bubble about to burst?

By Richard Walker

Artificial intelligence was the business story of 2018. But here is why I think 2019 is the year the story will unravel. Companies will start to question the cost and effectiveness of their expensive AI programmes. Financial analysts will stop awarding investment ratings based on ‘digital strategy’, and go back to looking at good old-fashioned cash flow. Journalists will start to lose interest in the machine-driven future. The AI bubble is about to burst.

Perhaps it has already begun. At the beginning of last year the Financial Times was running about 30 stories a month tagged ‘artificial intelligence’. By June that had risen to almost 50. But last month there were only ten stories.

Worldwide patterns of Google searches tell a similar story of artificial intelligence fatigue. Searches for ‘machine learning’ – one of the building block concepts of artificial intelligence – have risen sharply over the last ten years, but they have been falling since late last year. In the UK they have fallen particularly sharply.

What is going on here? Wasn’t artificial intelligence supposed to be the transformative technology of the near future, taking over human work in everything from driving cars to diagnosing cancer (and putting plenty of us out of jobs in the process)? Weren’t machines, with their massive computing power and speed-of-light operation, supposed to solve all the puzzles and problems of business at a stroke? The promise was of exponential change and unlimited enhancement, and soon.

But not so fast. The world of AI as applied to real lives and real businesses is turning out to be a complicated place, where promise and delivery seldom match up.

Take banking. Banking is essentially a data business. Banks crunch data on individuals and companies, and they use those data to calculate risks that tell them whether to engage with customers and, if so, how to price their services. That has always been a laborious, time-consuming business. But now new financial providers – the so-called Fintechs – have sprung up to take advantage of AI-powered automation in everything from regulatory compliance to foreign exchange, cutting out a lot of labour cost along the way. The lure of these services is that they are supposed to be painless: easy sign-up, instant acceptance, and low fees.

That is until reality bites back. A few minutes browsing the message boards and review sites will show just what customers think of these services, and it’s not too pretty. Complaints include cards that don’t work, unauthorised payments from accounts, and above all customer service that it would be generous to call skimpy. It turns out that replacing the humans in critical service roles is not as easy as some had hoped.

Medical diagnosis powered by AI ‘expert systems’ – such as the oncology and general-practice assistants currently in use by healthcare providers in China, Africa, the US and Europe (including the UK) – has suffered a similar pushback. While these have been shown to be powerful diagnostic tools with great potential, there have also been frequent complaints from medical professionals that the accuracy of diagnosis is often oversold. The cardinal error of AI boosters is to mistake potential for actual benefits. It is almost inevitable that the next phase of the artificial intelligence revolution will be marked by disappointment and rejection.

This is a familiar pattern, as anyone who lived through the dotcom boom and bust can attest. The business world loves narratives, especially narratives that can be sold to investors, but stories have a way of getting ahead of themselves. It’s time to bring the story back down to earth.

Having spent the last year researching AI implementation for one of the world’s largest corporate consultancies, I can report one clear finding: getting to the promised land where AI does the work for us is a hard, hard slog.

The first and probably the most important task is to cut through the confusion over terminology. Machine learning, cognitive computing, deep learning, neural networks – today these are increasingly being treated as interchangeable signifiers of something exciting, futuristic and mysterious. But these terms are not interchangeable. Getting a handle on which words refer to which things is a big step towards understanding what AI can and cannot do.

Start where AI started, with machine learning. Most artificial intelligence as used in the real world is based on some form of machine learning, a labour-intensive way of getting computers to act as humans would in closely defined situations.

This is what academic practitioners call ‘weak AI’. These are linear systems that cannot absorb feedback and need continual supervision. Machines ‘learn’, but only what humans teach them. Such systems are already at work in expert applications such as chatbots, diagnostics and game playing, in robotic process applications like audits, claims processing, billing and record keeping, and in broader machine learning applications such as statistical analysis, fraud detection and the algorithms that shape social media feeds.
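To make that concrete, here is a minimal sketch in Python (my own illustration, using the scikit-learn library and invented numbers, not an example drawn from any particular vendor): a toy fraud classifier of the weak-AI kind. The model does nothing more than reproduce the patterns in the handful of examples a human has already labelled for it.

```python
# A toy 'weak AI' classifier: supervised machine learning with scikit-learn.
# The features and labels below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical transaction features: [amount in £1,000s, hours since last login]
X_train = [[0.1, 1], [0.2, 2], [0.3, 1], [9.0, 48], [12.0, 72], [15.0, 96]]
y_train = [0, 0, 0, 1, 1, 1]   # human-supplied labels: 0 = normal, 1 = fraud

model = LogisticRegression()
model.fit(X_train, y_train)          # 'learning' = fitting to labelled examples

print(model.predict([[0.25, 3]]))    # resembles the normal examples -> [0]
print(model.predict([[11.0, 60]]))   # resembles the fraud examples  -> [1]
```

The point of the exercise is not the dozen lines of code but the labelled data: without humans to supply and maintain those labels, the ‘intelligence’ evaporates.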

Then there is ‘strong AI’, or cognitive computing. The first thing to grasp about cognitive computing is that it exists in theory but not in practice. The standard definition of cognitive computing is that it is adaptive, interactive (with humans and other machines), iterative (it asks questions), and contextual (it understands more than face value). It can potentially pass the Turing Test of whether machine responses are indistinguishable from human responses. But it belongs in the future.

Where things get really confusing is that many of what are likely to be component parts of implementable cognitive computing do exist. Broadly these are in the category of ‘deep learning’ systems, which can absorb feedback and can in some cases learn without training. Deep learning includes the recognition and evaluation of visual objects, and learning to play games without explicit instruction in the rules (something that, as Wittgenstein pointed out, any average child can do). An allied application is ‘natural language processing’, which interprets the context as well as the face value of utterances.
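Again, a toy sketch makes the difference visible. The snippet below (my own illustration in PyTorch, not something taken from the research) gives a small neural network no rule for the XOR function, only four labelled examples; the network works the rule out for itself by repeatedly adjusting its weights in response to its own errors. Scaled up enormously, with images or board positions in place of pairs of bits, this is the feedback loop behind object recognition and game-playing systems.

```python
# A minimal deep-learning sketch: a network infers XOR from examples alone.
# No rule is programmed in; the weights are adjusted from error feedback.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])   # XOR: no single linear rule separates these

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):        # the feedback loop: predict, measure the error, adjust
    optimiser.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    optimiser.step()

print(net(X).round())        # typically converges to [[0], [1], [1], [0]]
```

Note what even this trivial network needs: thousands of passes over the data to learn a rule a child grasps at once, which hints at why the full-scale versions are so hungry for energy and for correctly categorised data.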

These are exciting developments, but they have yet to be integrated into anything like true cognitive computing, and may never be. They are energy hungry, dependent on vast stores of correctly categorised data, and currently they are easily fooled by anyone with a mind to do so. Indeed, it is quite likely that the idea of ‘true’ cognitive computing is a blind alley, and that AI will remain a category of evolving, competing technologies that will never be integrated into a single entity.

This gap between usable machine intelligence and theoretical cognitive computing is one that AI’s boosters have exploited. Many businesses end up buying into the idea that there is an easy way to replace all the sophisticated things that well-paid humans do with a cheaper machine replacement. They will be disappointed.

So what happens next? The dotcom experience probably points the way. First there is a gradual but accelerating adoption of a new set of technologies. Then, a bubble of frenzied interest, all hot capital and exaggerated claims, followed by a sudden bust. We may be at that tipping point now.

Finally the dust settles. And it is then – just as with the internet – that the real long boom begins. The winners will be those who have taken the trouble to understand the real nature and challenge of AI.

Richard Walker is a journalist and adviser to financial companies