22 March 2017

Don’t fear the Fourth Industrial Revolution

By Aengus Collins

Any change can be unsettling, but changes as profound as those being unleashed by the current phase of technological development – known as the Fourth Industrial Revolution – are likely to be particularly destabilising. Technology pulls together the various networks that constitute modern life and fuses them into a complex “system of systems”, in which risks become difficult to identify and even more difficult to measure.

We can point to relatively isolated technological risks, such as the risk of a cyber attack interrupting business operations, but in an increasingly interconnected world the consequences can be much greater. Consider, for example, the immediate cascading impact of a cyber attack that knocks out a provider of critical infrastructure. Or the social and political impact of automation and artificial intelligence if they are allowed to reconfigure the world’s labour markets as radically as some have suggested is likely.

Technology is also shaping many of our background assumptions or perceptions, and this in turn can shape our understanding of and attitude towards risks. We live in a world defined by both the accelerating pace of technological change and the uncertainty this speeding up causes.

Anecdotally, at least, more and more of us feel that we are running to stand still – just about keeping pace with some technological developments that affect us but largely oblivious of many others and unsure of how they all fit together.

Amid optimism, anomalies

In an effort to broaden our understanding of technology-related risks, for this year’s World Economic Forum Global Risks Report we supplemented our standard risk perceptions survey with a special technology module. We asked respondents a number of questions about their perceptions of 12 emerging technologies expected to play an important role in digital evolution, from AI and distributed ledger technologies to geoengineering and space technologies. First, we asked them to assess the potential benefits and potential negative consequences of each technology. The results are summarised in the first chart below.


[Chart: potential benefits vs potential negative consequences of 12 emerging technologies. Image: World Economic Forum]

The first thing to note about these results is how optimistic they are. The average score for benefits (5.6 on a scale of 1 to 7) is much higher than the average score for negative consequences (3.8).

There are also interesting variations. AI and biotechnology stand out in the top-right quadrant as potentially high-stakes technologies: get them right and we stand to benefit strongly, but get them wrong and we face serious negative consequences. By contrast, and perhaps surprisingly, geoengineering is out on its own in the top-left quadrant, viewed by respondents as presenting us with potential negative consequences that are not matched by potential benefits.

We also asked which technologies needed better governance. The results, summarised in the chart below, were striking, with AI and biotech completely eclipsing the other 10 technologies. On the face of it, this is a curious result. What does it say about perceptions of geoengineering and linked sensors (as in the Internet of Things)? These are perceived as having potentially negative consequences that are broadly similar to those of AI and biotech but without the potential pay-off on the benefits side. And yet they come nowhere close in terms of the perceived need for better governance.


[Chart: perceived need for better governance of the 12 technologies. Image: World Economic Forum]

The governance challenge

Questions about how best to govern rapidly evolving new technologies are already pressing, and their urgency is certain to intensify in the years ahead. This is one of the key reasons behind the World Economic Forum’s decision to establish its new Centre for the Fourth Industrial Revolution, which opens in San Francisco this week.

Our current models and methods of governance are no longer fit for purpose in the face of the changes being unleashed by the Fourth Industrial Revolution, so we must find new ways of doing things. Governance needs to be stable, predictable and transparent enough to build confidence among investors, scientists and society at large. But it also needs to be agile and adaptive enough to accommodate the rapid evolution both of technologies themselves and of the uses we find for them.

Getting governance right will not be a simple matter of weighing the costs and benefits of individual technologies. The depth of the interconnections linking different technologies to each other and to almost every other domain of modern life means that decisions in this area are subject to huge complexity, uncertainty and ambiguity.

We face an intense and prolonged process of deliberation in order to work out how we want to proceed. That means getting the science right, of course, but it also means grappling with the profound ethical questions that some emerging technologies raise. More than ever, getting to grips with global risks means getting to grips with fundamental questions about what human society could or should look like.

This article was originally published by the World Economic Forum.

Aengus Collins is Practice Lead, Global Risks at the World Economic Forum