
Navigating the Gray Area

AI in Healthcare 

Published February 17, 2025

Analysis by Simran Agarwal


Artificial Intelligence (AI) has grown rapidly in recent years and is no longer a technology of the future. Rather, it is a force actively shaping our world and the way we view and interact with our surroundings, raising critical questions about its impact and regulation. AI is defined as a branch of computer science that aims to make machines capable of performing tasks that normally require human intelligence (5). These tasks include, but are not limited to, solving problems, making decisions, and recognizing patterns. AI’s significance in society, and our dependence on it, have continued to grow, and with them questions about how to regulate AI in the legal sphere.


AI is increasingly being used in healthcare: it analyzes data from X-rays, CT scans, and MRIs, helps discover and develop drugs, and distills advanced research into more accurate data (3). However, this growing reliance on AI has raised many ethical and legal dilemmas.


Because physicians can use AI as a clinical tool, a gray area has emerged around the agency of AI versus that of physicians. In practice, it is no longer clear whether physicians who rely on AI to test for diseases and plan treatments remain liable when the AI’s assessment is incorrect, which creates legal problems. In the past, when a patient sued a physician, the physician could be held liable for negligence for causing the patient harm (4). With AI now in the loop, it is unclear whether that negligence falls on the AI’s developer or on the physician.


The United States does not currently have an established legal framework for regulating AI, and there is little legal precedent for determining liability. On the liability front, UnitedHealthcare, one of the largest insurance providers, is facing a class-action lawsuit for negligence over its use of an AI algorithm called nH Predict. The lawsuit, filed in federal court in Minnesota, alleges that nH Predict, a predictive algorithm, denied many elderly patients extended care and forced them to pay out of pocket for necessary treatment. The plaintiffs are the estates of deceased patients who had been covered by Medicare Advantage plans offered through UnitedHealthcare (2).


According to the lawsuit, roughly 90 percent of nH Predict’s denials that patients appealed, including in federal proceedings, were reversed, a sign of how inaccurate the algorithm was. The case is still being decided, with thousands of individuals suing and billions of dollars in damages at stake.


This case illustrates not only the dilemmas of overreliance on AI but also the legal problems now arising (2). With so little human involvement in and oversight of AI, major companies are not facing proper repercussions for their patients’ deaths. Additionally, courts are not well equipped to set standards for regulating AI in healthcare.


In both the legislative and executive branches, the control of AI is a hazy realm. President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which in Section 8(b)(iv)(A) calls for “a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings ... for associated incidents that cause harm” (6). This provision signals that President Biden and his administration are committed to developing new approaches to protect the American people, while also clarifying or reinforcing existing guidelines.


Biden hopes the order will provide stronger standards for security and regulation.

Many professors and lawyers have also been consulted on whether the current standards are sufficient to regulate AI. Nicholson Price, a law professor at the University of Michigan, said the executive order was “pretty vague but set out reasonable goals” (1). Following Biden’s order, the Department of Health and Human Services issued a rule in December 2023 requiring more transparency around AI. Ryan Clarkson, founder of Clarkson Law Firm and one of the lead lawyers in the lawsuit against UnitedHealthcare, took an opposing view, stating, “Regulators and legislators are trying to keep up with it, but are not doing a great job” (1). Clarkson’s statement reflects much of the uncertainty surrounding regulation: even when standards are introduced, many remain unsatisfied.


While the future of AI regulation is uncertain, lawyers anticipate more cases similar to the one against UnitedHealthcare. There is hope, however, that Clarkson’s lawsuit could “go the distance” and help establish a precedent for AI, one that could push the federal government to set clearer, stricter standards to prevent AI from being misused.



  1. Bloomberg Law. “AI Lawsuits Against Insurers Signal Wave of Health Litigation,” February 1, 2024. https://news.bloomberglaw.com/health-law-and-business/ai-lawsuits-against-insurers-signal-wave-of-health-litigation

  2. Forbes. “AI Ethics Essentials: Lawsuit Over AI Denial of Healthcare,” November 16, 2023. https://www.forbes.com/sites/douglaslaney/2023/11/16/ai-ethics-essentials-lawsuit-over-ai-denial-of-healthcare/

  3. Los Angeles Pacific University. “Revolutionizing Healthcare: How is AI Being Used in the Healthcare Industry?,” December 21, 2023. https://www.lapu.edu/ai-health-care-industry/

  4. National Library of Medicine. “The future of artificial intelligence in medicine: Medical-legal considerations for health leaders,” March 31, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC9047088/

  5. University of Illinois Chicago. “What is (AI) Artificial Intelligence?,” May 7, 2024. https://meng.uic.edu/news-stories/ai-artificial-intelligence-what-is-the-definition-of-ai-and-how-does-ai-work/

  6. The White House. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/