4 October 2019

The right way to look at facial recognition technology

By Aled Maclean-Jones

A team stares intently at grainy CCTV footage on a computer screen. They lean in towards the operator. Then they utter a single word: “Enhance.” Suddenly a pixelated blob turns into a high-definition image. This is perhaps one of the worst Hollywood tropes, but thankfully one that is set to die out.

The reason? With the accumulation of billions of images, the development of powerful graphics processing units and the creation of neural networks that can exploit these assets, facial recognition technology can now recognise faces at speed and scale.

Many of facial recognition’s applications are mundane, such as unlocking phones and tagging photographs. But many are not. In particular, the technology provides a powerful tool to search for and identify individuals within a crowd.

The discovery that the developers of the King’s Cross estate had used facial recognition to scan visitors without their knowledge led to a vocal backlash. In September the developers conceded defeat: the cameras are to be switched off.

And so the campaign against facial recognition continues in earnest. Many favour an outright ban. The most high-profile example is San Francisco: earlier this year the city banned police from using facial recognition technology in any circumstance.

Campaigners for a ban argue that the dangers posed by facial recognition are such that the technology’s use, in whatever circumstances, lies beyond justification. This fundamentalist approach ignores the realities of such technology and modern life more generally.

Such a view is particularly difficult to justify when one looks at the breadth of its potential applications. There is a radical difference between a person using facial recognition on their private devices, companies using it to furtively track shoppers to better target online advertising, and the police using it to prevent crime in public spaces.

In each of the above examples, context is everything. The first is a mundane trade-off between privacy and convenience, to the consumer’s benefit. We make such trade-offs all the time, and such use should clearly be permitted. The second is an underhand approach with little consumer benefit to outweigh the privacy intrusion. The third requires a balance between privacy and security to be struck, but in striking that balance, proportionate use is justified subject to suitable safeguards.

The point these three examples illustrate is that when assessing facial recognition, we are able to weigh the benefits against the challenges and strike a balance accordingly. Facial recognition is not a technology so different in character from all others that it is immune from such an approach.

That is not to say there aren’t challenges. The face is a feature that is difficult to hide and, most importantly, impossible to change. The risks of false positives and racial bias therefore present a particular danger that we must subject to intense scrutiny. The way to mitigate these problems, however, is not to ban the technology, but to ensure that adequate and effective safeguards are in place to regulate it.

Fortunately, in the UK sizeable safeguards against misuse of facial recognition already exist, in the form of data protection laws, codes and guidance. The Divisional Court’s recent decision that South Wales Police’s use of facial recognition was lawful made this clear.

But these safeguards are far from perfect. Indeed, the court in the South Wales Police case noted that they would need to be subject to “periodic re-evaluation”. The question of whether we need new or updated laws to specifically govern facial recognition is where the debate must centre.

And with this in mind, it is important not to forget the technology’s benefits. Facial recognition provides a secure, efficient and scalable way of verifying identity. It turns laborious manual checks into automated processes, requiring human intervention only at a later stage. Tasks that would have taken weeks can be done in real time.

When it comes to detecting and preventing crime the benefits are self-evident, but such benefits are not limited to law enforcement, as anyone who has avoided an almighty queue at Gatwick airport and sped through an eGate instead will attest.

A moderate approach, which focuses on safeguards as opposed to bans, is also firmly in line with public opinion. The Ada Lovelace Institute recently commissioned some excellent polling on this topic.

When polled, 71% agreed that “the police should be able to use facial recognition technology on public spaces, provided it helps reduce crime”. On the other hand, just 14% agreed that “the government should outlaw the use of facial recognition technology so it cannot be used in policing at all”. Any supposed swell of public support for a ban on the use of facial recognition is nothing more than a mirage.

That’s not to say that support is unqualified. 55% are of the view that the government should place limits on police use of facial recognition. And the Institute’s polling also makes clear that people aren’t so keen on facial recognition being used for reasons other than public safety. Just 7% support its use in supermarkets to track shopper behaviour, and only 4% support it being used to monitor the actions and behaviours of job applicants.

People’s feelings about facial recognition are therefore nuanced and context-dependent, in stark contrast to the tenor of the current debate. Indeed, on a topic where many of the loudest voices support an outright ban, it is the public, not the experts, who are the voices of moderation.

This is the right approach. The call for a total ban on facial recognition treats a complex technology as a simple, amorphous blob. If we heed it, we risk choosing absolutism over compromise. We will all lose. A moderate approach that recognises facial recognition’s benefits and challenges is instead the right way forward. To find it, we must look to the wisdom of the crowds.


Aled Maclean-Jones is a barrister at 5RB specialising in media and data protection law.