25 October 2021

The data dilemma

By Richard Mollet

Over nine months after formally departing from the EU, the UK government is turning its attention to reshaping British data policy. On September 10, it published the consultation ‘Data: a new direction’, setting out how the data protection regime would be geared towards supporting competition, driving economic growth and fostering innovation. All this is to be done whilst following the over-arching strategic principle to ‘maintain high data protection standards without creating unnecessary barriers to responsible data use’.

But within those apparently simple lines lies the crux of the difficult challenge facing policymakers and businesses across the world in this field. A trade-off does have to be made between protective standards and barriers to use. However, not only are those terms highly subjective (one person’s defensive wall is another person’s obstacle), but people’s views on them can also change dramatically according to context. Depending on the service, we can all oscillate between finding a digital service’s ability to identify us impressively cool and finding it sinisterly creepy.

It is not immediately obvious which side legislators in particular should take in this debate. It is certainly not a simple question of the business community wanting lower barriers whilst citizen-consumers want higher ones – the reality is more nuanced than this. Companies working in this sector, such as RELX, understand and deeply value citizens’ rights and concerns about data usage.

Whilst we might not have the answers to these conundrums, we can offer up some useful perspectives. So what policy approach can deliver good outcomes both for those who prioritise access and those who demand protection?

First, policy needs to begin with transparency. It is vital that the citizen-consumer knows precisely when they are making a choice about letting their data hurdle a protective wall – and what other barriers are thereby being negotiated. With knowledge come trust and consent. For example, it is now commonplace for websites to tell people when tracking technologies such as cookies are in use, and with one further click it is possible to read and understand the difference between the ‘strictly necessary’ cookies which keep the site operational and those used for behavioural advertising. The user can choose between these with relative ease and adjust them to personal taste.

Secondly, just as important is the why: users need to know what they gain by putting their personal data into someone else’s hands. Legislators, as much as companies, should be much clearer about the advantages of data-sharing, and work to rebut cynics who portray any attempt to gather data simply as ‘big brother’ or ‘surveillance capitalism’.

For example, if a scientific researcher wants a really effective search tool that can anticipate their interests and make recommendations for reading, then they need to agree to provide a modicum of personal data to allow that. Or if a consumer wants a really secure subscription service, one that can quickly recognise when it is not them using the account, then again some personal data must be ceded to the provider.  

The same is true at a societal scale when trying to tackle crime and terrorism. A government may decide to allow some of its population’s collective personal data to be analysed so that social problems, such as tax fraud, can be identified and rooted out. The head of MI5, the UK’s security service, Ken McCallum, recently talked about not ‘prioritising privacy over security’, neatly encapsulating the tension at play here. The trick is to identify the equilibrium point at which surrendering the right amount of privacy yields the optimum level of protection.

Thirdly, policy-makers need to be clearer about the benefits of aggregation and big data. Not all uses of data are about personalisation. Despite the frequent charge that people should not be treated ‘just as a statistic’, that is often exactly the right thing to do. Designing public services, be they roads, schools or hospitals, can only be done effectively by thinking about crowds of people, not individuals. Neither governments nor pharmaceutical companies could have begun to tackle Covid-19 without big data, and without being able to deal with people as populations. Although we are all ultimately individuals, it is also true that we act collectively. It detracts nothing from our human dignity – one can argue it contributes to it – to recognise that we tend to act as a crowd and that our aggregated behaviours and characteristics are to some extent predictable and can be the basis for decision-making.

Data policy is difficult because society’s views on what is required of legislation are inconsistent. On the one hand we yearn for individuality, privacy and security; but on the other hand we enjoy anonymity, personalised services and freedom from surveillance. Legislation has to walk this high-wire to ensure that data policy is neither a gateway to intrusion nor a rampart against innovation.

As the UK government continues its consultation, it will have to recognise that these tensions exist and create policy which has the openness, flexibility, and sensitivity to accommodate the range of attitudes to data protection.  

Richard Mollet is Head of European Government Affairs at RELX.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.