26 September 2023

Why ‘cyberpsychology’ is the password to tackling online crime


A couple of years ago, I wrote a piece for CapX on how cybercrime really works, and how human error is at the root of most cyber attacks. In the intervening time we have had Covid, a massive increase in internet use, and a correspondingly massive spike in cybercrime.

And for once, ‘massive’ is the right word. No one really knows the true cost of cybercrime, although one widely cited estimate puts it at around $8trn this year, roughly double the entire GDP of Japan. Even if only a tenth of that bill were ransom paid to hackers, cybercrime would still be the second biggest single business sector in the world, just behind global telecoms.

And despite the vast sums that are spent by companies and governments on counteracting cyber attacks, the cybercrime wave just seems to swell and accelerate – a permanent dark side of the online world that seems both inevitable and undefeatable.

As a result, cyber-pessimism has seeped into institutions. A common attitude in the cybersecurity world is that getting hacked is not a matter of ‘if’ but ‘when’, and more recently this has morphed into the even more defeatist ‘it’s not when but how often’.

Is that pessimism justified? It may be that the cyber defence experts are missing something obvious. Most cyber attacks depend more on human behaviour than on technology for their success, and attacks that exploit human psychology demand a psychological defence.

Welcome to the emerging realm of ‘cyberpsychology’. It already exists, albeit on the fringes of the cybersecurity world, and arguably it demands more attention (and more budget) from those with the most to lose from cybercrime. Cyberpsychology recognizes that the most used channel for cybercrime – the so-called ‘attack vector’ – is not based on digital technology or network design flaws but on the usually unwitting behaviour of people inside the organisation.

In fact, it has long been known that human error is the common factor in the majority of successful cyber attacks, and the latest data only confirms it. The most comprehensive annual account of the types of cyber attack recorded worldwide is the Verizon Data Breach Investigations Report, and the 2023 report, based on over 16,000 ‘security incidents’, calculates that human error was to blame in at least 74% of data breaches.

In this context human error can mean many things, although the use of stolen access credentials and ‘social engineering’ (essentially persuading someone to open a door that should be locked) are the most common. Some forms of cyber attack do not demand a human element, such as ‘network intrusion’ or ‘web application attacks’, although in practice these too may begin with a human error, which may be as simple as putting a memory stick of unknown provenance into a computer’s USB port.

If these varieties of cyber attack and the psychological patterns that underlie them seem arcane, it is worth remembering that cybercrime is ubiquitous: it affects everyone, directly or indirectly, and it happens every day, every minute. So it is unsurprising that the list of organisations seriously damaged by cybercrime in 2023 alone is very long. Among thousands of cases it includes many universities and government agencies (health authorities among them), along with Deutsche Bank, ChatGPT, UPS, Reddit, T-Mobile, Uber, Western Digital, AT&T, Pepsi, PayPal and the social media menace formerly known as Twitter, as well as cybersecurity specialist Verizon and even, somewhat ironically, BreachForums, a large marketplace for, yes, hacked data.

According to IBM the average cost to an organisation of an individual data breach is now $4.4m. That cost eventually passes to the wider economy: in its latest annual ‘Cost of a Data Breach’ survey, IBM also says that over half of companies have increased prices as a result of data breaches. If psychology has anything to contribute when it comes to limiting those costs, perhaps businesses and governments should be paying attention.

The challenge for cyberpsychology is the challenge of shaping human behaviour. The benefit of making people less prone to online error is great, but so is the difficulty of achieving it. As organisations have already learned, when you tell people not to do a thing they quickly become desensitised to the message, and just go on doing it.

But cyberpsychologists do have some tools at their disposal, and they are based on understanding the behavioural weaknesses that cybercriminals exploit.

For the cyberpsychologist, the biggest and brightest red flag in any organisation is likely to be the prevalence of low expectations. Low expectations are the hacker’s friend. It has long been recognised that low expectations correlate with poor performance, a link sometimes known as the Pygmalion Effect, after a 1969 management study on the subject.

The prevalence of ‘it’s not when but how often’ in cybersecurity thinking is a classic case of low expectations. The cyberpsychologist’s task is to work out how to set expectations that are high but achievable (the Pygmalion Effect also tells us that unattainable goals are as bad as low expectations).

One way of raising expectations and improving cybersecurity is to use the psychological principle of social proof. If people see other people they trust doing something, they are much more likely to do it themselves. Social proof is often used by hackers to convince people that others have already done whatever it is the hacker wants, but it works both ways: studies have shown that when people see evidence of peers using cyber defence routines, they are much more likely to do the same.

The cyberpsychologist might also look at the complexity of organisations and their cyber defences, and at how to simplify them. In cybersecurity, complexity spells trouble, because complex systems usually lead people to develop ad hoc ‘workarounds’, and workarounds are untested and insecure. Complex passwords are a case in point: we are frequently told that a password has to be long, mix upper and lower case and include non-standard characters, because that makes it hard to crack. But it is also hard to use, so people end up writing passwords down or storing them in insecure files. In practice, a password of three short random words is long enough and strong enough for most purposes.
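The arithmetic behind that advice can be sketched in a few lines of Python. The figures here are illustrative assumptions, not from the article: a pool of 94 printable characters for a ‘complex’ password, and a 7,776-word list of the kind used by the Diceware passphrase scheme.

```python
import math

def entropy_bits(pool_size: int, choices: int) -> float:
    """Entropy in bits of `choices` independent, uniformly random
    picks from a pool of `pool_size` equally likely options."""
    return choices * math.log2(pool_size)

# An 8-character password drawn truly at random from ~94 printable
# ASCII characters (an assumed pool size, for illustration):
complex_pw = entropy_bits(94, 8)     # ~52 bits

# Three words drawn at random from a 7,776-word list (the size used
# by Diceware; also an assumption for illustration):
three_words = entropy_bits(7776, 3)  # ~39 bits

print(f"random 8-char complex password: {complex_pw:.1f} bits")
print(f"three random words:             {three_words:.1f} bits")
```

On raw entropy a truly random eight-character password comes out ahead, but that is precisely the cyberpsychological point: people rarely choose complex passwords at random, so real-world ones fall far short of the theoretical maximum, whereas three random words are memorable enough that people can actually pick them at random, and roughly 39 bits is ample against online, rate-limited guessing.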

These approaches are only the beginning for cyberpsychology – but at least they are a beginning. If organisations are going to get to grips with the extraordinary threat that cybercrime now poses, they are also going to need to move beyond today’s almost total reliance on technology-as-defence.

There is not much sign of that happening. For example, the latest annual review from the National Cyber Security Centre is supposed to be the UK’s definitive official statement on cyber threats and defence, yet neither the word ‘human’ nor the word ‘psychology’ appears in it.

As wake-up calls go, this one is pretty simple. If three quarters of an $8trn problem is caused by human error and psychological exploits, why wouldn’t you spend three quarters of your defence budget (currently estimated at a paltry $160bn a year) on human-based, psychological defence?

That could be the password to saving an awful lot of time, money and grief.


Richard Walker is a journalist and communications adviser to financial companies.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.