American Association for Physician Leadership

Artificial Intelligence in Healthcare: Legal Issues

Joseph P. McMenamin, MD, JD, FCLM


Mar 6, 2025


Physician Leadership Journal


Volume 12, Issue 2, Pages 35-38


https://doi.org/10.55834/plj.5448947832


Abstract

Artificial intelligence (AI) is poised to transform various facets of healthcare, but it also raises multiple legal issues that lack definitive answers or even a comprehensive set of questions at present. This essay explores key legal concerns related to AI in healthcare, including licensure, privacy, data security, regulation, and liability. Regulatory frameworks are evolving, with the FDA and FTC playing significant roles, yet the approach remains fragmented. Questions of intellectual property, malpractice, product liability, and civil procedure are complex and unresolved. Reimbursement policies are beginning to address AI, but concerns about bias and professional unemployment persist. This dynamic landscape requires healthcare professionals to stay informed and adaptable as legal standards evolve.




Artificial intelligence (AI) is likely to change many aspects of healthcare and, in doing so, uncover an array of legal issues. This essay aims to identify and briefly discuss a few legal issues, even though, at present, we lack not only clear answers for many, but also a comprehensive set of questions.

LICENSURE

Most states define “practice of medicine” rather broadly. In Virginia, for example, it means “prevention, diagnosis, and treatment of human physical or mental ailments, conditions, diseases, pain or infirmities by any means or method” (emphasis added). Much of what AI does or will do in healthcare overlaps this description.

AI systems have passed U.S. medical licensing examinations. Should an AI system be licensed if it engages in activities resembling those in statutory definitions? This question may seem far-fetched; however, hundreds of years ago, when corporations were brand new, many doubted whether a corporation could sue or be sued. Today, suing corporations is an industry unto itself.

At present, no state requires AI licensure. But five years from now? We don’t know the answer to that.

PRIVACY, DATA SECURITY, AND OWNERSHIP

Machine learning depends on troves of data, some highly sensitive, often hoovered up from the internet. Many people fear that training AI will inevitably trench on privacy rights.

In 1979, in Smith v. Maryland, the U.S. Supreme Court held that “A person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.” Do users of social media who share their information with numerous third parties fall under this doctrine? If so, and if AI trains, in part, on information from social media, do those using these technologies relinquish their rights to privacy in the information?

Healthcare privacy advocates invoke HIPAA and its penalties for violators. HIPAA, however, covers only specified health information that “covered entities” or their “business associates” use or disclose. HIPAA does not apply to information that permits inferences about health (for example, buying an HIV test online). Nor does HIPAA govern user-generated health information or bar data triangulation to overcome de-identification. It might not offer much protection in the AI era.

So far, 20 states have passed consumer privacy laws roughly similar to the EU’s General Data Protection Regulation. These new laws impose greater transparency requirements, stricter data collection rules, and tougher security measures. Most make exceptions for HIPAA-governed data, however, so the new laws’ stringency may have little impact on protecting healthcare data.

FEDERAL AI REGULATION

In October 2023, President Biden signed an Executive Order, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, calling for greater AI regulation. In response, the Department of Health and Human Services established an AI Safety Program, created an AI Task Force, and finalized a rule requiring transparency for AI in health IT.

Soon after his inauguration in 2025, President Trump rescinded the previous EO and signed a new one, Removing Barriers to American Leadership in Artificial Intelligence, providing few policy details but creating the role of special advisor for AI and crypto. The new EO requires that this new special advisor, along with the national security advisor and the assistant to the President for science and technology, create an AI action plan within 180 days. Although the action plan will probably be more lenient with regard to AI regulation than the Biden EO, any predictions at this writing are mere speculation.

Several states, including California and New York, are developing AI regulations themselves, though focused so far more on consumer protection than on healthcare. This approach, however, could create varied regulations that are difficult to comply with and/or ill-suited to healthcare.

FOOD AND DRUG ADMINISTRATION REGULATION

Many people assume that the hub of federal healthcare AI regulation is the Food and Drug Administration, as AI could be seen as a “medical device” and thus FDA-regulable. AI’s fit, however, is a bit strained; the definition of a device is “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article….” AI is none of these. Yet the FDA has cleared or approved nearly 1,000 AI-enabled medical devices so far, and many more are on the way.

To its credit, the FDA has looked to avoid excessive regulation. The agency contends that AI is NOT a device, and so NOT FDA-regulated, if it meets all of the following criteria: (1) it is not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system; (2) it is intended to display or print medical information about a patient; (3) it is intended to support or make recommendations to a healthcare provider (HCP) about prevention, diagnosis, or treatment of a disease (allergies, drug interactions, etc.); and (4) it is intended to enable HCPs to independently review the basis for the software’s recommendations, so there is no intention that HCPs rely primarily on such recommendations to make diagnostic or therapeutic decisions.

The 21st Century Cures Act is similarly circumspect. From the concept of “devices,” and thus regulation, it excludes functions intended for (1) administrative support of a healthcare facility; (2) maintaining or encouraging healthy lifestyles; (3) serving as an EMR to the extent such records are intended to transfer, store, convert formats, or display a chart equivalent; and (4) transferring, storing, converting formats, or displaying lab test or other device data and results, or findings by an HCP, unless the function is intended to interpret or analyze findings.

Despite its restraint, however, the FDA is concerned about the transparency of AI models and the security and integrity of data generated from continuous learning approaches, among other problems; therefore, the agency is eager for stakeholder input to understand AI’s role in the life sciences and to assess potential risks and benefits.

The full effect of the rescission of the Biden EO on FDA’s regulatory approach is unclear, especially because another new EO, Regulatory Freeze Pending Review, freezes new rules “until a department or agency head appointed or designated by the President . . . reviews and approves the rule.” A recent FDA guidance on Predetermined Change Control Plans for AI-Enabled Devices illustrates the uncertainty, because that guidance relies on definitions in the Biden EO.

To address the complex scientific and technical issues related to digital health technologies, the FDA has established a Digital Health Advisory Committee, which met for the first time in November 2024 to discuss how the agency should review medical devices that rely on generative AI, such as chatbots. At this writing, the future role of this Committee under the new Administration is unknown.

FEDERAL TRADE COMMISSION RESPONSE

The Federal Trade Commission investigates and seeks redress for “unfair” or “deceptive” trade practices and thus could frame AI manipulation or misleading AI-generated medical advice as such. The FTC may interpret Section 5 of the FTC Act to extend to healthcare providers who are seen as using AI that, intentionally or not, excludes certain populations from care. Through a civil investigative demand, the FTC could obtain a provider’s algorithms and data.

To decrease risk, HCPs should do a compliance check and strive to avoid bias. The FTC might be less aggressive in the new administration, however, than it was under President Biden.

INTELLECTUAL PROPERTY

Again, AI ingests oceans of data. So, AI could violate the copyright of a creator of art or text used in training. Current litigation is exploring those questions and whether fair use is a defense. Trademarks not created by humans are protectable, so AI has utility in trademark development.

Could a patent be issued to an AI system, as opposed to a human? Of the jurisdictions that have addressed that question so far, including the United States, only one has said yes: South Africa.

MALPRACTICE

A provider’s duty to a patient arises from his relationship with the patient. Can AI have such a relationship? If so, could a patient assert a colorable claim for malpractice against an AI system?

In some states, a hospital can be liable for its staff’s negligence. What about a hospital employing AI to manage patients? More generally, a healthcare provider can be liable for the work of its agent, identified through a variety of tests, usually including the right to control the putative agent’s acts. Traditionally, however, the principal and agent can negotiate the scope of authorization, dissolve the relationship, or renegotiate its terms. AI can do none of these. Is AI more like an employee whose conduct the provider could be liable for, or is it more like a machine?

Informed consent law is murky as well. At present, for example, it is unclear whether the doctor must tell the patient that AI is being used, although it seems wiser to do so for risk management purposes. But must physicians be knowledgeable enough to explain how AI works? Must they assess whether the AI system was trained on a representative data set? Distinguish between the roles humans and the AI system will play during each part of a procedure? Compare the results of AI-assisted and human-only approaches? Disclose AI recommendations that the physician declines to follow?

PRODUCT LIABILITY

HCPs provide services, not products, so they are immune from product claims. Is AI likewise immune from strict liability because it provides a service?

Under strict liability, the seller of a defective product is liable even though the seller has exercised all possible care in the product’s preparation and sale, provided the product is expected to and does reach the user without substantial change in its condition.

In April 2023, several technology companies moved to dismiss a product liability claim in which plaintiffs alleged that the defendants’ social media platforms caused addiction and mental health problems in adolescents. Defendants argued that their platforms are not “products” but “services,” and plaintiffs’ alleged injuries flow from ideas expressed through those services.

The court ruled that several of the defendants’ tech services should be treated as products: parental control and age verification, limits on in-app screen time, barriers to account deactivation and/or deletion, labels for edited content, and filters for content manipulation. The court treated the reporting of child sexual abuse material, however, as a service, not a product. Will these conclusions hold up on appeal?

CIVIL PROCEDURE IN AI TORT CLAIMS

Assuming tort claims can be asserted against AI systems, what court has jurisdiction? Ordinarily, one looks to citizenship. A corporation is a citizen of the state where it was incorporated and of the state where it has its principal place of business. But of what jurisdiction is a robot a “citizen”?

Then, once we have selected the court, we must decide what state’s laws apply. Conflict of laws doctrines rely on residency and intent to establish a domicile. But AI cannot have intent, so how do we determine domicile and, thus, applicable law?

REIMBURSEMENT

Some people fear that providers could manipulate AI in billing software to increase payments, for example, by changing a few pixels of an image to make a benign skin lesion look malignant. HCPs worry that payers may do the reverse. Regulators in New York, California, and Massachusetts have found that an insurer unlawfully used an algorithm to deny or limit coverage for mental health services.

California recently enacted SB 1120, which provides, among other things, that a payer’s AI tool cannot “supplant health care provider decision making” or discriminate against enrollees in a manner violating federal or state law. Other states may follow suit.

Some progress has occurred, however, regarding reimbursement. Noridian recently issued its final Local Coverage Determination for AI-Enabled CT-Based Quantitative Coronary Topography/Coronary Plaque Analysis for various jurisdictions, including California. Medicare will now pay for this service for beneficiaries with acute or stable chest pain and no known coronary artery disease.

ADDITIONAL CONCERNS

Bias

If an AI’s training data are not drawn from a sample reasonably representative of the population’s demographic and medical makeup, the system’s outputs may be unreliable.

The FDA is also concerned about automation bias — that physicians or patients or both might place too much weight on and trust in AI-based clinical decision software.

Professional Unemployment

Some healthcare professionals fear that AI may put them out of business. That concern is particularly prevalent among physicians in visual specialties, such as pathology and radiology. Most people believe, however, that AI will not replace physicians. Rather, physicians who become skilled in using AI will replace those who are not.

Antitrust

There is some concern that competitors could use AI to collude and fix prices. On the other hand, algorithmic pricing might encourage highly competitive behavior.

CONCLUSION

There will likely be considerable, and possibly rapid, evolution in laws governing AI in healthcare. Those using it must be alert to changes and agile enough to adapt.

Joseph P. McMenamin, MD, JD, FCLM

Joseph P. McMenamin, MD, JD, FCLM, is a partner at Christian & Barton LLP, Attorneys At Law, Richmond, Virginia.
