Today’s hospitals are not sacrosanct. The large amounts of patient and staff data that they collect and store can make them prime targets for cyber attacks.
There is currently a lack of information on the link between hospital cyber breaches and patient mortality, but it may be that the greatest danger is not from the hackers themselves. A US study published in September shows a correlation between the “improved” cyber security policies employed by hospitals after a hack and the probability that patients will die within a month of experiencing a heart attack. The road to hell is paved, it seems, with good intentions.
Malware attacks on hospitals most often occur as collateral damage when uncontrolled extortive campaigns rampage across the internet. This was what happened during the massive disruption of the UK’s National Health Service (NHS) by the WannaCry ransomware in 2017.
In the wake of this disruption, heightened security measures were put in place to reassure patients and doctors as well as to stymie future breaches. However, the study shows that the same features that make hospitals attractive targets for cyber extortion also mean that new security measures can be counterproductive.
After WannaCry, for example, independent British healthcare charity the Nuffield Trust found that there was “confusion” about the ongoing threat, leading to all electronic systems being regarded as “mission-critical” and requiring hourly updates. This was not the case. But without investment across the system in healthcare-specific security infrastructure, the NHS resorts to more demonstrative options, such as an overindulgence in flashy technologies and generic policies that may do more harm than good.
This is exactly the approach that the researchers at Vanderbilt University and the University of Central Florida have warned against. Their work found a correlation between a hospital experiencing a data breach and an increased likelihood of its heart attack patients dying within 30 days, with possibly life-threatening delays in electrocardiograms continuing for at least three years following the breach.
This is not a direct consequence of attacker activity, but the result of the need for healthcare staff to adjust to new security controls. Doctors may be used to time-saving shortcuts, such as sharing passwords with colleagues, but these tricks are swiftly curtailed when the fallout from a real hack makes security a hospital-wide priority. A consultant with over 20 years’ experience in the NHS, who preferred to remain anonymous, commented that “now passwords have to change all the time, and we spend ages trying to remember how to get into the records”. Even straightforward security provisions appear to interfere with patient care.
She also commented that “I don’t write electronic notes without also completing a paper trail [because] patients lost faith in our system, regardless of security updates. This delays my clinic; patients become tired and frustrated, as do we.”
The study also reported that hospitals that experienced data breaches recorded a 0.36 percentage point rise in the 30-day mortality rate for heart attack patients, an effect that persisted for at least three years after the cyber attack. According to the British Heart Foundation, in the UK alone, one heart attack patient is admitted to hospital every five minutes. In the US, the American Heart Association estimates that there is one heart attack every 40 seconds. A 0.36 percentage point increase in mortality could therefore mean around 2,840 lives lost per year in the US alone.
Heart attacks are only the first measurable example of the contradictions that can emerge between security measures and patient welfare. Cyber security protocols could also threaten patients’ emotional and physical health in other ways; a simple example is increased control over wi-fi, which could inhibit the ability of vulnerable patients to communicate with family members while hospitalised.
Hospitals and health ministers are understandably cagey about policies that might inadvertently increase patient mortality, so uncovering information about other side effects of an apparently one-size-fits-all approach to security strategy across sectors is challenging. But a former legal adviser to an NHS trust, who also declined to be named, said that “looking any deeper into the unintended results of cyber resilience ‘solutions’ that are used across the board would undermine the security of the whole healthcare sector, for which individual hospital boards could be held liable – so there is no incentive to do so”.
In business, the need to engage with well-intentioned but bureaucratic data protection precautions can be irritating, but ultimately it is worth the opportunity cost. In the frenetic atmosphere of a hospital, this may not be true. What is certain, however, is that security benefits no one if it is not designed for its environment.
Anjuli RK Shere is a cyber security doctoral researcher at the University of Oxford