6 August 2018

Planning an AI future is a fool’s errand

By Tim Worstall

Quite the most amusing part of the current debate about Artificial Intelligence is the manner in which we are recapitulating two of the big mistakes of the 20th century. These are the Socialist Calculation delusion and what we might call the New Soviet Man delusion.

An example of the Socialist Calculation delusion comes to us most vocally from Ali Rahimi, who argues that AIs are being built by alchemy rather than science. In this version of events, the machines are just bodge jobs, processing lots of data to see what fits. As he says, “I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy.”
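
To make the “alchemy” concrete: the everyday machine learning workflow really is fit-and-check rather than derive-and-prove. What follows is a minimal sketch in Python; the data, the model and the numbers are hypothetical illustrations of the workflow, not anything Rahimi published. The model is judged solely on whether it predicts held-out data, with no theory of why it works.

```python
# A minimal sketch (hypothetical, illustrative) of the "see what fits"
# workflow: fit a model to data, judge it purely by held-out performance.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points, 2 features, labels from a rule the
# modeller is assumed not to know.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Split into training and held-out sets.
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Logistic regression fitted by plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))   # predicted probabilities
    w -= 0.1 * (X_train.T @ (p - y_train) / len(y_train))
    b -= 0.1 * (p - y_train).mean()

# The only test applied: does it predict well on data it hasn't seen?
p_test = 1 / (1 + np.exp(-(X_test @ w + b)))
accuracy = ((p_test > 0.5) == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point is the shape of the process: nothing is verified from first principles; the fit either holds on unseen data or it doesn’t.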

Rahimi’s wish sounds reasonable enough in some contexts; after all, I’ve always preferred my nuclear plants to be built by those who know what they’re doing. It’s a less salient critique when it comes to societies and economies, however. As Hayek and Mises famously pointed out, some things are just too complex for us to grasp in such a thorough manner.

Indeed, throughout a developed economy we use bodge jobs of processed data without quite grasping the detailed processes at work. We don’t know how many apples will be eaten next year, so we leave it to market processes. As Hayek insisted, markets are the only computing engine we have capable of doing the data processing that produces useful information.

We also see this same problem in various well-meaning editorials. AI is going to change the world, we don’t know how, therefore we must plan! Direct! Channel those forces! But if we don’t know how a technology is going to change the world, planning is simply not possible.

As the Guardian quite rightly asks, who could have known that the 19th-century switch from whale oil to kerosene would ultimately lead to the development of plastics? And what plan started in 1880 would have given us a world either with or without plastics? None, clearly, for no one even knew of the possibility. The same is true when we try to work out what effects AI will have in decades to come; in both cases, total ignorance is not a good basis for crafting a plan.

The other error is what I would call the New Soviet Man problem. This is the idea that while the joys of socialism didn’t suit actual human beings too well, the Soviet government would eventually create a whole new kind of human who would absolutely love it. Of course, homo sovieticus never did quite materialise.

This brings us to another common argument about AI: that it should not incorporate the things we know about actual human beings.

For example, we know that some, perhaps many, humans are racist, misogynist, greedy and short-termist. AIs, trained on the data we generate, can pick up those foibles, and can definitely show what we would call prejudice.
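
As a concrete illustration of how that happens, here is a minimal sketch with hypothetical synthetic data rather than anything from a real system: past decisions were biased, a model is fitted to them, and the model faithfully reproduces the bias.

```python
# A minimal sketch (hypothetical data) of a model absorbing prejudice
# from its training data: biased historical decisions in, bias out.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical historical hiring records: a group flag and a skill score.
group = rng.integers(0, 2, size=n)          # group membership, 0 or 1
skill = rng.normal(size=n)

# Past decisions depended on skill AND on group membership (the prejudice).
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1).astype(float)

# Fit a logistic regression to those past decisions.
X = np.column_stack([group, skill])
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - hired) / n)
    b -= 0.1 * (p - hired).mean()

# Two identical candidates of average skill, differing only in group:
# the learned model mirrors the historical disparity, not corrects it.
for g in (0, 1):
    cand = np.array([g, 0.0])
    prob = 1 / (1 + np.exp(-(cand @ w + b)))
    print(f"group={g}: predicted hire probability {prob:.2f}")
```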

To insist that they must not is to miss the point entirely. The only possible use of AIs is to provide us with knowledge about the world we live in, knowledge we cannot derive purely from logic but can only gain through data processing.

After all, the world is full of deeply prejudiced human beings. An AI which didn’t account for that would have little value in describing our world. That’s why we should not just want, but must absolutely insist, that AIs incorporate our errors.

The New Soviet Man mistake would be to try to design AIs for a world free of humans with all their messy, illogical behaviour. It is also, of course, an argument against the various alternatives to free market capitalism. Sure, if humans didn’t respond to incentives then a rigidly enforced equality of outcome might work just fine. In the real world, incentives are important and any system which doesn’t allow for a degree of inequality arising from application or effort isn’t going to work. The AI mistake is subtly different but based upon the same underlying error.

What all this means is that we can train the AIs on data and see what happens, rather than trying to understand the complex interactions, which we can’t do anyway. If the last century has taught us anything, it’s to avoid those two key errors: trying to plan complex systems and assuming a world shorn of us humans and all our messy habits. The AI pioneers would do well to bear those lessons in mind.

Tim Worstall works at the Adam Smith Institute and the Continental Telegraph.