Artificial Intelligence and Liability in Medicine

By Tim Stone posted Sun April 17, 2022 04:57 PM

Medicine is a science. But while some pure sciences can produce near-perfect results, medicine remains imprecise because human error enters the mix. Many medical facilities are starting to integrate AI to diagnose and treat patients in an effort to eliminate that error.
The rapid entry of AI is pushing the boundaries of medicine. It will also push the limits of the law.
AI is being used in health care to flag anomalies in head CT scans, cull actionable data from electronic health records, and assist patients in understanding their symptoms.
At some point, AI is bound to make an error that injures a patient. When that happens, who, or what, is liable?
I'll use radiology to answer this question because many believe AI will have its most significant impact there, and some even believe AI will replace radiologists.
Here's a hypothetical to illustrate some of the legal dilemmas: Innovative Care Hospital, an early adopter of technology, decides to use AI instead of radiologists to interpret chest x-rays as a way to lower labor costs and improve efficiency. Its AI functions well but, for unknown reasons, misses an obvious pneumonia, and the patient dies of septic shock.
If Innovative Care develops the algorithm in-house, it will be liable through what's known as enterprise liability. Though the hospital isn't legally obliged to have radiologists oversee the AI's interpretation of x-rays, removing radiologists from the process assumes the risk of letting AI fly solo.
Under this arrangement, a suit would probably be settled. The hospital will have factored the cost of lawsuits into its business model. If the case goes to trial, the fact that the hospital uses AI to increase efficiency won't likely work in its favor, even if the savings are passed on to its patients. Efficiency can be framed as "cost-cutting," which juries don't find as appealing as MBA students do.

AI and Healthcare 

AI is poised to significantly reduce the mistakes made in health care and can already be found worldwide.
Oxford's John Radcliffe Hospital has an AI system capable of outperforming cardiologists at examining chest scans to diagnose a heart attack, allowing patients to receive a diagnosis earlier than ever before.
Chinese ophthalmologists are using AI to diagnose a rare eye condition responsible for approximately 10% of childhood blindness with accuracy similar to that of human doctors. Stanford researchers are also using AI to diagnose different types of lung cancer.
AI is also proving useful in various other medical and diagnostic applications, including ALS, dementia, musculoskeletal injuries, cardiovascular problems, cancer, dermatology, telehealth, and even virtual nursing assistants.

What Happens When Artificial Intelligence Doesn't Work Correctly?

Depending on how AI is being used, the effect of AI failing to perform perfectly can vary.
Pilots and co-pilots constantly monitor autopilot systems, and if the AI fails to operate correctly, they can quickly make any necessary adjustments, often without passengers even noticing.
If your bank's AI malfunctions, bank tellers or other qualified employees can step in and manually process the deposit.
If AI makes a mistake suggesting a new person to follow on your favorite social media platform, unfollowing is an easy remedy and little more than a nuisance.
But if AI misdiagnoses a patient or fails to perform a task accurately, it can ruin that patient's life.

Can Artificial Intelligence Be Held Liable for Medical Malpractice? 

While the potential of AI is awe-inspiring, it will miss the mark now and then. And when there is a mistake, who is the liable party for potential AI malpractice?
While everyone seems to agree that patient safety is a priority, responsibility for AI is somewhat unclear. When AI is used as a decision aid to complement physicians in diagnosis, not replace them, liability remains focused on the health care provider who used it.
If diagnosticians, such as radiologists, use AI to aid diagnosis by highlighting abnormalities on scans, they would probably still be responsible for the final interpretation. If, for instance, a radiologist failed to catch a finding such as cancer or pneumonia on a patient's image or scan, the patient could potentially be eligible to file a medical misdiagnosis malpractice claim.
Diagnosticians who disagree with their AI assistants might face increased liability. If the AI used by the hospital highlights a lung nodule on a chest radiograph that the radiologist overlooks and fails to note in the report, the patient could receive a misdiagnosis. The diagnostician could potentially be liable for missing the cancer and disregarding the AI's interpretation of the imaging.
Currently, the liability of physicians and the health care systems that employ them can be broken into three categories:

Medical malpractice

Doctors could potentially be liable for failing to consider an AI's suggestions. If they do follow the AI's recommendations and still fail to meet the required standard of care, the health care provider could also be liable.

Products liability

Doctors could potentially face liability for their decision to implement an inappropriate, flawed, or malfunctioning AI system in their practice.

Additional negligence

Doctors could also be liable if they work for or consult with AI developers and errors exist in the underlying algorithms.
Nevertheless, in all cases, medical malpractice requires claimants to prove that a licensed physician's deviation from the standard of care resulted in injury. AI has not been licensed to practice medicine to date, which adds another question mark to the scope of its liability. Most courts have traditionally held that devices cannot be legally responsible since they are not legal persons.

AI is a Product, Not a Person

When AI technology makes an error, it can lead to a misdiagnosis, failure to diagnose, medication mistakes, delayed treatment, and other forms of medical malpractice that cause preventable harm to patients. Patients might need further therapy or more invasive treatments, or encounter life-threatening complications, among other damages. When patients suffer these damages due to medical negligence, they deserve compensation for their losses.
Can you hold AI liable for medical malpractice, just as you can medical professionals? The short answer is no, because AI technology is not a licensed medical professional, or even a person who can act negligently. Nevertheless, this does not mean there is no legal recourse for patients who suffer harm due to AI mistakes.
AI technology is viewed as a product, not a person. When the product malfunctions, victims can hold the manufacturer liable for their damages by filing a product liability claim. This is one possible course of action when AI errors cause injuries. Another is a traditional medical negligence case based on improper use of, or reliance on, AI rather than "old-fashioned" medical care.

Contact a Hawaii Medical Malpractice Lawyer

As the medical field and its tools become more complex, so do the legal claims that arise from medical negligence. Our Hawaii medical malpractice lawyers know how to approach even the most challenging medical injury cases.