Traditional medical malpractice claims focus on whether a medical provider violated the applicable standard of care and, if so, what harm resulted from that violation. This model of understanding malpractice evolved from centuries of hands-on, in-person medical care.
Today, however, medical technology has introduced a new potential party to the situation: a medical algorithm.
As technology has advanced, so has the use of algorithms to assist medical decision-making. Although these tools, often called “black box medicine,” have outpaced regulatory oversight in several areas, they are not foolproof. As a result, attorneys handling cases in which an algorithm made a faulty recommendation often find themselves blazing new trails in medical malpractice litigation.
How Do Medical Algorithms Work?
Medical algorithms analyze massive data sets to identify patterns and generate information or recommendations accordingly. Because these algorithms can process data far more quickly than humans, they can incorporate and adapt to far more information than any individual’s decision-making process can.
These types of algorithms aren’t limited to medical use. Rather, they’re changing nearly every industry, from insurance claims processing to hiring decisions.
The term “black box medicine” refers to the fact that, in many cases, the algorithm itself is proprietary. Providers who use these algorithms to support decision-making don’t see how the algorithm makes its recommendations or the data sets on which the recommendation is based.
The result is that the recommendations provided by the algorithm can be difficult to analyze for accuracy. For instance, when an algorithm recommends a particular dose of insulin for a diabetic patient, questions may arise: Why that dose? Is that dose appropriate, and if so, based on what criteria?
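To illustrate why those questions are hard to answer, here is a hypothetical sketch of a black-box-style recommendation. The records, the similarity rule, and the function name are all invented for illustration; the point is that the caller receives only a number, with no visibility into the data or logic behind it:

```python
# Hypothetical sketch of a "black box" dose recommendation: the provider
# gets a number back, but the training records, the similarity metric,
# and the parameter choices behind it are hidden from view.
from math import dist

# Hypothetical past records: (glucose mg/dL, weight kg) -> insulin units given
_training_data = [
    ((180.0, 70.0), 4.0),
    ((240.0, 80.0), 8.0),
    ((150.0, 60.0), 2.0),
    ((300.0, 90.0), 12.0),
]

def recommend_insulin_dose(glucose: float, weight: float, k: int = 2) -> float:
    """Return a dose by averaging the k most 'similar' past patients.

    A provider calling this sees only the returned number -- not the
    underlying records, the distance metric, or the choice of k.
    """
    ranked = sorted(_training_data,
                    key=lambda rec: dist(rec[0], (glucose, weight)))
    return sum(dose for _, dose in ranked[:k]) / k

dose = recommend_insulin_dose(200.0, 75.0)
print(dose)
```

Even in this toy version, answering “why that dose?” requires access to the historical records and the similarity logic; when those are proprietary, that inquiry becomes a central obstacle in litigation.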
Questions of Liability for Black Box Medical Algorithms
When a physician or provider uses AI as part of the decision-making process and harm results, who is liable?
Questions of liability in medical AI algorithm malpractice cases have only begun to arise as the technology has become more commonplace. As a result, answering these questions can be difficult.
In medical malpractice claims, one place to start is with the provider. The rise of AI has not yet changed the fact that providers are typically held to the standard of care in the relevant medical community. Consequently, the use of an algorithm is unlikely to absolve a provider of responsibility if an error occurs that the provider would ordinarily have been expected to catch.
On the other hand, as AI becomes more commonplace, cases may arise in which the standard of care is to trust the computer’s recommendations. In these cases, providers may not be found liable for following the algorithm’s recommendation, even if the algorithm itself is faulty.
When the algorithm itself appears to be the culprit, the question presented may be more productively posed as one of product liability rather than medical malpractice. Attorneys can focus on the company, or companies, that programmed the AI, manufactured or designed its hardware and software, packaged the system, and so on.
Difficulties in product liability claims related to black box medicine will likely arise from the intellectual property and data privacy spheres. While US courts appear not to have tackled questions of medical AI trade secrets yet, defendants may raise trade secrecy or other intellectual property protections in response to demands to examine how an algorithm works.
What Experts May Be Needed in Medical Algorithm Malpractice Claims?
Experts in medical AI malpractice claims may need to focus on questions of provider behavior or on the efficacy and appropriateness of the algorithm itself. Which types of experts are required in a particular case will depend on which claims are being made.
To focus on provider malpractice, experts may need to discuss whether the use of artificial intelligence meets the standard of care for the relevant medical specialty and community. Choosing an expert who is familiar with the incorporation of AI algorithms into healthcare tools and processes can help an attorney build a compelling argument.
Experts on the creation, training, and operation of the algorithm itself are likely to come from fields like computer programming and engineering, with a focus on how artificial intelligence is developed. Here, it may help to drill down into questions about how the algorithm makes decisions in order to further refine the type of expert chosen. For instance, a case that focuses on whether an algorithm is faulty because it was trained on hypothetical rather than real data sets may call for an expert who specializes in choosing the medical data sets used to train algorithms.
Artificial intelligence in medical technology is a relatively new development. Until regulatory rulemaking catches up with these devices, attorneys litigating “black box medicine” cases are likely to find themselves in untested waters.