
Spotlight
9 December 2020

How AI changed cyber security

Artificial intelligence has enabled new ways to attack systems, but also to defend them.

By Laurie Clarke

Cyber attacks are predicted to cost the world $10.5trn each year by 2025. It’s a trend that has accelerated during the Covid-19 pandemic. At the height of the crisis, Kaspersky reported soaring remote desktop protocol attacks on home workers, while IT security company Barracuda Networks found that Covid-related email scams leapt by as much as 667 per cent.


“Artificial Intelligence is playing an increasingly significant role in cyber security,” says Kevin Curran, a researcher at Ulster University. According to a study last year by Capgemini, a consulting and services group, 61 per cent of firms say they cannot detect breach attempts without using AI tools.

AI has a range of applications in cyber security, including network security, fraud detection, malware detection, and user or machine behavioural analysis. Its most popular application is in network security, where “the huge dimensionality and heterogeneous nature of network data” as well as the “dynamic nature of threats”, make it extremely useful, Curran says. “AI can use statistics, artificial intelligence, and pattern recognition to discover previously unknown, valid patterns and relationships in large data sets, which are useful for finding attacks.”

AI and machine learning systems can scan an organisation's information systems to pre-emptively discover vulnerabilities. These AI network-monitoring tools can detect and fix more irregularities than is humanly possible, processing all of an organisation's relevant data to build a picture of the "baseline" level of activity. Any significant deviation from this baseline will result in the model flagging the activity as suspicious.
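The baseline idea described above can be illustrated with a minimal sketch. This is not any vendor's actual product: it simply treats a stream of requests-per-minute counts (hypothetical figures) as "normal" activity, summarises them statistically, and flags anything that strays too far from that baseline.

```python
import statistics

def build_baseline(samples):
    """Summarise normal activity (e.g. requests per minute) as mean and spread."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_suspicious(observation, baseline, threshold=3.0):
    """Flag any observation more than `threshold` standard deviations from the baseline."""
    mean, stdev = baseline
    return abs(observation - mean) > threshold * stdev

# Hypothetical training data: requests per minute during normal operation
normal_traffic = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100]
baseline = build_baseline(normal_traffic)

print(is_suspicious(101, baseline))   # typical load: False
print(is_suspicious(400, baseline))   # sudden spike: True
```

Real network-monitoring systems model many features at once and learn the baseline continuously, but the principle is the same: define normal, then alert on significant deviation.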


As such, AI can be important for reducing "zero-day" attacks, where hackers exploit vulnerabilities in software before developers can fix them. Curran says this market is "flourishing".


But AI can also be wielded by malicious actors for large-scale attacks: cyber criminals are increasingly automating their malware. Conventional cyber security systems flag malware by identifying known malicious code, but because attackers frequently tweak that code, traditional security software struggles to spot it. AI systems can compare code against a vast database of samples, making them far more adept at designating it as potentially malicious, even when the malware is embedded in benign code.
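One way to see why similarity-based matching beats exact signatures, as the paragraph above describes, is a toy sketch (the byte patterns and threshold here are invented for illustration). Instead of demanding an exact match against a known sample, it compares sliding byte windows, so a lightly tweaked variant is still flagged.

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """Sliding n-byte windows of a sample, used as simple features."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity between the n-gram sets of two samples."""
    sa, sb = ngrams(a), ngrams(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical "database" of known malicious byte patterns
known_malware = [b"connect;download;decrypt;inject;persist"]

def looks_malicious(sample: bytes, threshold: float = 0.5) -> bool:
    """Flag samples that closely resemble any known malicious sample,
    even after small tweaks that would defeat exact signature matching."""
    return any(similarity(sample, m) >= threshold for m in known_malware)

# Attacker appends a byte to dodge an exact signature check
tweaked = b"connect;download;decrypt;inject;persist2"
print(looks_malicious(tweaked))                    # still flagged: True
print(looks_malicious(b"ordinary benign text"))    # not flagged: False
```

Production systems use far richer features and trained classifiers rather than raw n-gram overlap, but the contrast with a brittle exact-match signature is the point.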


“Building static defense systems for discovered attacks is not enough to protect users,” says Curran. “More sophisticated AI techniques are now needed to discover the embedded and lurking cyber intrusions and cyber intrusion techniques.” But security professionals are quick to caution against AI evangelism. Humans will remain integral to security for the foreseeable future, needed to fine-tune AI models and verify that they are working correctly. Without that oversight, organisations can be lulled into a false sense of security, which could lead to far more devastating attacks.

This article originally appeared in a Spotlight report on cyber security.