Today, major healthcare companies are investing heavily in various AI-powered devices. For example, Zimmer Biomet and the New York City-based Hospital for Special Surgery recently inked a three-year deal to create the HSS/Zimmer Biomet Innovation Center for Artificial Intelligence in Robotic Joint Replacement. "The collaboration aims to develop decision support tools—powered by data collection and machine learning—to assist surgeons planning and predicting outcomes for robotic-assisted joint replacements." Additionally, Johnson & Johnson has gone on record saying that it sees "a huge opportunity to harness data, machine learning and artificial intelligence to help drive decision-making at all levels of healthcare." As artificial intelligence starts playing a larger role in the modern healthcare space, a critical question will need to be answered: Are AI-powered solutions products or services?
This distinction is critical because products are subject to both strict liability and negligence claims, while services are subject only to negligence claims. In a negligence lawsuit, the plaintiff may recover damages if she can prove that the defendant's negligence or recklessness caused her injuries. But in a strict liability lawsuit, a plaintiff may recover damages even if the defendant was not negligent or at fault. In other words, a company may be found liable not because it failed to exercise a certain level of care, but simply because its actions (selling a product) resulted in some kind of harm.
The product versus service distinction is also important because it raises the question of whether programmers can be found to have committed some form of malpractice if their AI is deemed inadequate. There is a growing industry selling programmer liability insurance. General liability policies protect programmers from third-party claims of bodily injury and property damage that may arise out of their work, while professional liability policies provide coverage against claims of negligence, errors, or mistakes related to programmers' professional services. And some companies that sell AI-enabled technology are actually purchasing such coverage for their programmers.
At this point, the law in this area is still underdeveloped. But useful analogies can be drawn from cases addressing similar issues in other contexts, and those analogies provide valuable insight into how this question will likely be answered.
Traditionally, software has been deemed a service, not a product, for product liability purposes, and negligence has been the relevant standard. See e.g., Rodgers v. Christie, No. 19-2616, 2020 WL 1079233 (3d Cir. Mar. 6, 2020) (affirming complaint dismissal on the grounds that an algorithmic pretrial risk assessment tool was not a “product” under the New Jersey Products Liability Act because it is not “tangible personal property” nor remotely “analogous to” it).
But when AI is incorporated into physical devices, like surgical robots, there is good reason to believe that product liability theories may apply. For example, in In re Toyota Motor Corp. Unintended Acceleration Marketing, Sales Practices, and Products Liability Litigation, the executor of a driver’s estate sued an automobile manufacturer to recover damages sustained when her vehicle unexpectedly accelerated without her depressing the accelerator pedal. 978 F. Supp. 2d 1053 (C.D. Cal. 2013). Plaintiff asserted several claims, including one for strict liability based on an alleged software defect that caused the unexpected acceleration.
Though this case did not involve AI, many of Plaintiff's allegations regarding the purported software defect could be directed against AI systems. For example, Plaintiff claimed that the software was "complex" and failed to conform to certain coding standards in its design. It was also alleged that the code's complexity led to an increased number of bugs, as well as the inability to fix one bug without introducing a new one. And Plaintiff asserted that certain software bugs could cause memory corruption, leading to unpredictable results and software failure that could cause the alleged unexpected acceleration.
This case is important because, although it did not specifically involve AI, Plaintiff's design defect claim survived summary judgment. This strongly suggests that when software—like AI—is integrated into physical machinery, courts will likely treat it as a "product" for product liability purposes.
As time passes and more cases involving AI make their way through our courts, the law will start catching up, and we will gain more clarity into how controversies involving this novel technology fit into our existing product liability framework. Companies that develop and/or use AI should proactively partner with experienced legal counsel to monitor trends and developments in this space, and chart effective legal strategies to eliminate or mitigate their liability exposure.