25 October 2018

AI is not a silver bullet for cyber security

By Sofia Svensson

The number of criminal cyberattacks keeps rising, and we are still largely in the dark about how best to tackle them. AI is often seen as the holy grail that will save us from hackers and ransomware. But will it actually be able to improve cyber security?

Last month, a North Korean spy was charged by the US Justice Department with helping to perpetrate the 2017 cyberattack on the NHS. The WannaCry attack affected organisations in more than 150 countries, including Spanish telecoms and German rail networks. In the UK, 47 NHS trusts were hit. Following this worldwide hack, operations had to be cancelled, ambulances diverted and patient records made unavailable.

Little has been learnt. Since the attack, every NHS trust tested against cyber security standards has failed, although, to be fair, defending oneself in cyberspace is not easy. A well-known imbalance in cyberspace favours the attacker, who needs to succeed only once, while the defender must remain perfect at all times. AI can help redress that balance by dramatically improving cyber defence capabilities.

With a combination of data, computing power and algorithms, artificial intelligence can detect irregular changes and, far quicker than the human eye, spot and address errors and vulnerabilities within a system. That would drastically shrink the hackers' opportunities to exploit those weaknesses, the so-called attack surface. The massive shortage of skilled cyber security workers makes artificial intelligence even more appealing.
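To make the idea concrete, here is a minimal sketch of the sort of anomaly detection involved, written in Python with the scikit-learn library. The features and figures are invented for illustration; a real deployment would draw on far richer telemetry and careful tuning.

```python
# Minimal sketch of anomaly detection on network telemetry.
# The feature set and numbers are hypothetical, chosen only
# to illustrate the technique.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated normal traffic: (bytes sent, requests/minute, failed logins)
normal_traffic = rng.normal(loc=[500, 30, 1], scale=[100, 5, 1], size=(1000, 3))

# Learn what "normal" looks like, allowing ~1% outliers in training data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of outbound data and failed logins should stand out.
event = np.array([[5000, 200, 40]])
print(detector.predict(event))  # -1 marks the event as anomalous
```

The point is not the specific algorithm but the speed: once trained, a detector like this scores new events in a fraction of a second, far faster than an analyst scanning logs by hand.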

In fact, this is already happening. In 2016, the US Defense Advanced Research Projects Agency (Darpa) held its ‘Cyber Grand Challenge’, a competition to create automatic defensive systems “capable of reasoning about flaws, formulating patches and deploying them on a network in real time”, that is, systems able to self-heal.

In a recent IBM and Ponemon Institute study, about half of the companies surveyed were already deploying some kind of cyber security automation, with a further 38 per cent planning to deploy a system within the next year. The study also found that organisations that had extensively deployed automated security technologies saved over $1.5 million on the total cost of a breach.

AI can also help reduce the risks that the prevalence of code reuse brings. Perhaps surprisingly, coders do not always write their own code from scratch. Instead, they often build on existing code to create new software, which is very efficient but also very risky. Since nobody audits the millions of lines of code being inherited, nobody actually knows who wrote it or who is accountable for its integration, and the flaws it carries can be exploited by hackers. Through machine learning, artificial intelligence can learn to assess and identify these types of code errors within seconds.
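As a toy sketch of how such a scanner might work, the Python snippet below trains a classifier to separate a handful of C fragments containing classically risky calls from safer equivalents. The examples and labels are contrived; production tools analyse parsed code and train on large curated datasets.

```python
# Toy sketch: flag reused code that resembles known-vulnerable patterns.
# The snippets and labels are illustrative, not a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "strcpy(buf, user_input);",                    # unbounded copy
    "gets(line);",                                 # unbounded read
    "strncpy(buf, user_input, sizeof(buf) - 1);",  # bounded copy
    "fgets(line, sizeof(line), stdin);",           # bounded read
]
labels = [1, 1, 0, 0]  # 1 = risky pattern, 0 = safer equivalent

# Character n-grams give a crude fingerprint of coding patterns.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Unseen reused code containing the risky call should be flagged.
print(model.predict(["strcpy(dest, src);"]))
```

With enough labelled examples, the same pattern scales up: the model scores every fragment of inherited code and surfaces the suspect ones for human review.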

Yet AI will not solve all the challenges cyberattacks pose, and some claim that the technology cannot yet cope with adversarial situations. Machines are certainly better at monitoring large amounts of data, but human supervision is still required when dealing with cyber security alerts. Hackers may also exploit the technology to make their attacks more sophisticated. AI could, for instance, be used to gather data for so-called ‘spear phishing’ campaigns, in which individuals are targeted with highly personalised fake emails.

Many products being rolled out at the moment involve ‘supervised learning’, which requires firms to choose and label the data sets that algorithms are trained on, for example by tagging code that is malware and code that is clean. One risk is that companies rushing to get their products to market use training data that has not been thoroughly scrubbed of anomalous data points, which could lead the algorithm to miss some attacks. Another is that hackers who get access to a security firm’s systems could corrupt the data by switching labels, so that some malware examples are tagged as clean code.
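That second risk, known as data poisoning, is easy to demonstrate on synthetic data. In the Python sketch below, the same classifier is trained twice: once on honest labels and once after an attacker has relabelled 40 per cent of the training malware as clean. The data is entirely made up; the point is only the mechanism.

```python
# Minimal sketch of label-flipping poisoning against a supervised
# malware classifier. Features are synthetic Gaussians, not real
# malware telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic features: clean code (label 0) vs malware (label 1)
X = np.vstack([rng.normal(0, 1, (1000, 4)), rng.normal(1, 1, (1000, 4))])
y = np.array([0] * 1000 + [1] * 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

honest = LogisticRegression().fit(X_tr, y_tr)

# The attacker relabels 40% of the training malware as clean.
y_bad = y_tr.copy()
malware_idx = np.flatnonzero(y_tr == 1)
flipped = rng.choice(malware_idx, size=int(0.4 * len(malware_idx)), replace=False)
y_bad[flipped] = 0

poisoned = LogisticRegression().fit(X_tr, y_bad)

# The poisoned model now waves through far more real malware.
print("malware recall, honest labels:  ", recall_score(y_te, honest.predict(X_te)))
print("malware recall, poisoned labels:", recall_score(y_te, poisoned.predict(X_te)))
```

The defence is unglamorous: audit training data, control who can write to it, and monitor models for sudden shifts in what they let through.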

AI is not a silver bullet, but it can help address the imbalance that currently favours the attacker in cyberspace and greatly enhance cyber security defences. At the same time, it is vital that security firms monitor and minimise the risks associated with their algorithmic models. The technology can shrink the attack surface by identifying code errors and system vulnerabilities within seconds, and with disastrous attacks on the rise, our systems certainly need it.

Sofia Svensson is a CapX intern and a freelance writer.