Cybersecurity has been a growing concern in recent years but, to date, attacks have generally involved stealing data or denial of service. Whilst these attacks are serious, the advent of AI raises a whole new danger from malicious actors.

As we come to rely more on AI to, amongst other things, drive our cars or fly our planes, we expose ourselves to much more immediate and dramatic risks than inconvenience and embarrassment. Cyber-attacks on AI systems could pose the risk of death or injury in a way that has never been seen before. This, inevitably, also has an impact on insurers and the depth and breadth of the cover they can feasibly provide.

Consider, for example, a hacker who manages to get into a plane’s AI system. The plane could be diverted to an alternative destination, leaving passengers and crew subject to potential kidnap and ransom, or, worse, deliberately crashed. And if one plane’s AI system is compromised, how many more in the same fleet may be vulnerable?

Equally dramatic is the possibility of driverless cars being compromised. Driverless cars are currently programmed to avoid pedestrians; it is not inconceivable that a hacker could re-programme them to do the opposite and disable the manual override. After the recent spate of terror-related incidents in which vehicles were used as weapons against innocent pedestrians, a single hacker could potentially do far more damage than several terrorist organisations put together.

These risks were also recognised in a research paper published in February 2018 following a study carried out by various academic and industry institutions. The paper goes further, recognising the risk of AI used in automated weapon systems being hacked. This is a very tangible risk: over the years, several individuals have managed to breach military-grade security.

In 2001, Gary McKinnon hacked into 97 US military and NASA systems over a 13-month period. Between 2012 and 2013, Lauri Love, a young autistic man in the UK, hacked into the Pentagon, the Federal Reserve and the US defence department. More recently, in 2015/16, Kane Gamble, a 15-year-old, got into the systems of US spy agencies.

At this point, we might expect to explore the relationship between AI cover and cyber cover, possible limits on liability, and considerations in risk assessment and pricing. The problem, however, is that the risk is relatively new: it is not yet fully understood and lacks a meaningful body of statistics for insurers to price it accurately. Warren Buffett suggested in 2017 that cyber risk was a bigger threat to humanity than nuclear weapons. This year, he explained to shareholders that he didn’t want much exposure to cybersecurity threats, adding:

‘I think anybody that tells you now they think they know in some actuarial way either what [the] general experience is like in the future, or what the worst case can be, is kidding themselves.’

Buffett’s caution demonstrates the wisdom of experience, but it leaves the industry with a dilemma: do we all avoid the risks, or do we have a wider obligation to provide some cover for our clients? Insurance is, after all, the oil that allows business and industry to operate.

It may be time for the industry to come together and decide what the range of risks may be, what we can cover and to what extent, how the risk can be managed, and how it can be spread between clients (both the developers and adopters of cyber products), governments, insurers and reinsurers. This could be an opportunity to get ahead of the risk rather than confronting these issues only after an event.