My colleague, Amy Foust, and I were recently discussing steps that medical device manufacturers can take to limit liability before a system security breach occurs. As if medical device companies didn’t have enough to worry about, the news this year has been replete with stories about software vulnerabilities and successful hacks of medical devices with integrated operating software. In April, the MIT Technology Review reported a simulated attack on a telesurgery robot that interrupted the function of the robot and prevented a full system reboot, leaving the simulated operators unable to resume control of the system.  In June, COMPUTERWORLD reported on real cyberattacks on hospitals that led to unmonitored, unauthorized export of medical data.  Worse, analysis showed that the hackers, had they chosen to, could have modified data stored in or reported from various medical and diagnostic devices.  In August, CBS News reported that federal officials and device manufacturer Hospira recommended that hospitals discontinue use of Symbiq® infusion pumps after it was discovered that drug delivery parameters could be modified remotely, without the need for a username or password.

With at least one device pulled from use because of security concerns, and successful hacking attempts documented at multiple healthcare facilities, medical device companies must treat cybersecurity as a critical issue. At the same time, the growing use of electronic medical records, telemedicine, and other high-tech clinical practices demands the flexibility, accuracy, and ease of use that come with software- and network-enabled devices.

From a legal perspective, a manufacturer will want to be able to show that it took all reasonable security precautions. For medical devices, design history files can include an assessment of the design inputs for a device. Design inputs assessed using 5-Whys or similar inquiries can be distilled to identify the communication capabilities that are essential to the proper function of the device, so that the finished product carries no more connectivity, and no more vulnerabilities, than necessary.
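By way of illustration only, a design team might capture that kind of inquiry in a simple structured record so that the rationale for keeping or dropping each communication capability is preserved in the design history file. The capabilities, questions, and conclusions in the sketch below are hypothetical, not drawn from any particular device:

```python
# Illustrative sketch only: a hypothetical record of a 5-Whys style review of a
# device's communication-related design inputs. Field names and entries are invented.
from dataclasses import dataclass, field

@dataclass
class DesignInputReview:
    capability: str                              # communication capability under review
    whys: list = field(default_factory=list)    # chain of "why is this needed?" answers
    essential: bool = False                      # conclusion: essential to device function?

reviews = [
    DesignInputReview(
        capability="Wireless telemetry upload",
        whys=[
            "Why? Clinicians need dosing history at the nurses' station.",
            "Why wireless? Pumps move between rooms and cannot stay cabled.",
        ],
        essential=True,
    ),
    DesignInputReview(
        capability="Open maintenance port",
        whys=["Why? Legacy field-service tooling still expects it."],
        essential=False,  # adds attack surface without clinical benefit
    ),
]

# Capabilities judged non-essential become candidates for removal before release.
for r in reviews:
    print(f"{r.capability}: {'keep' if r.essential else 'remove or redesign'}")
```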

Tools like Failure Modes and Effects Analysis (FMEA) can be used to conduct a disciplined and well-documented review of the security concerns particular to a specific device. FMEA can help an engineering team identify whether and when, for a specific device, encryption and authentication protocols might be insufficient. If a company is later sued for failing to adopt a particular safeguard, a documented FMEA of the software security may help explain why that safeguard was not considered necessary at the time the device was designed. An FMEA may also help the team assess possible drawbacks of certain security features, such as potential delays in resetting equipment during a medical emergency. It is important that cybersecurity measures not impair the use of the device in clinical environments, some of which can be chaotic at times.
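As a rough sketch of how such a review might be scored, a team could rank security failure modes using the classic FMEA risk priority number (severity × occurrence × detection). The failure modes and 1-to-10 ratings below are invented for illustration; a real FMEA would reflect the judgment of the cross-functional engineering team:

```python
# Illustrative FMEA-style scoring pass for software security failure modes.
# The failure modes and 1-10 ratings are hypothetical examples only.

# Each entry: (failure mode, severity, occurrence, detection)
failure_modes = [
    ("Unauthenticated remote change to drug delivery parameters", 10, 3, 7),
    ("Unencrypted export of patient telemetry over hospital network", 7, 5, 5),
    ("Authentication delay blocks device reset during emergency", 9, 2, 3),
]

def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA metric: RPN = severity x occurrence x detection."""
    return severity * occurrence * detection

# Rank failure modes so the highest-risk items get mitigations, and so the
# rationale for safeguards deliberately not adopted is documented.
ranked = sorted(failure_modes,
                key=lambda fm: risk_priority_number(*fm[1:]),
                reverse=True)
for mode, sev, occ, det in ranked:
    print(f"RPN {risk_priority_number(sev, occ, det):4d}  {mode}")
```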

Both manufacturers and healthcare providers would be wise to track the security protocols used by their devices, and to evaluate the feasibility of software security updates, or plan for device obsolescence, when those protocols are no longer current. Manufacturers who do not plan to provide updates (which are, after all, expensive given the design, validation, and implementation requirements) may nonetheless wish to alert original purchasers, who can then elect to discontinue use of less-secure devices or adopt site-specific protections to compensate for any outdated protocols.
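A simple inventory check, sketched below with hypothetical device names and protocol designations, illustrates how either party might flag units whose security protocols have fallen out of currency and then decide among an update, an alert, or site-specific compensating controls:

```python
# Illustrative sketch of a device-inventory check flagging units that rely on
# security protocols no longer considered current. Devices, protocols, and the
# "deprecated" set are hypothetical examples.

DEPRECATED_PROTOCOLS = {"SSLv3", "TLS 1.0", "WEP"}

inventory = [
    {"device": "Infusion pump, ward 3", "protocol": "TLS 1.0"},
    {"device": "Telemetry gateway",     "protocol": "TLS 1.2"},
    {"device": "Legacy bedside monitor", "protocol": "SSLv3"},
]

# Devices on deprecated protocols become candidates for a software update, an
# alert to the original purchaser, or site-specific compensating controls.
for unit in inventory:
    if unit["protocol"] in DEPRECATED_PROTOCOLS:
        print(f"ALERT: {unit['device']} uses {unit['protocol']} (outdated); "
              "consider update, retirement, or compensating controls")
```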