American Association for Physician Leadership

Artificial Intelligence and Machine Learning: What Physician Leaders Need to Know

Lola Butcher


May 8, 2023


Physician Leadership Journal


Volume 10, Issue 3, Pages 31-33


https://doi.org/10.55834/plj.6915833702


Abstract

Artificial intelligence and machine learning, already ubiquitous in search engines, online shopping, and other everyday activities, will soon become commonplace in healthcare delivery. Clinicians need support in evaluating, trusting, and using the emerging tools.




Cornelius A. James, MD, co-authored a National Academy of Medicine (NAM) discussion paper last year on the adoption of artificial intelligence in medical diagnosis. As a result, other physicians frequently ask him, “Is AI going to replace me?”

James, clinical assistant professor in the departments of internal medicine, pediatrics, and learning health sciences at the University of Michigan, says, “My response, and I think it’s the response of many who are doing work in this field, is this: AI is not going to replace physicians. Physicians unable to use or engage or team with AI will be replaced by physicians that do have the knowledge and skills to effectively do so.”

That response, of course, triggers many other questions: When? How? What to do?

In some health systems across the country, AI is already on the job: synthesizing data from electronic health records to suggest diagnoses, scanning blood samples for bacteria, reading radiology images, and giving automated feedback to patients who receive physical therapy on a digital platform. But widespread use of proven AI-enabled technologies in a way that changes the standard of care has not yet happened, and many clinicians have no experience using the emerging tools for clinical care.

The physician leaders most knowledgeable about AI in healthcare say these technologies hold tremendous potential — and tremendous peril. Clinicians need guidance and support as the healthcare industry scrambles to identify and disseminate best practices.

Michael E. Matheny, MD, MS, MPH, director of the Center for Improving the Public’s Health through Informatics at Vanderbilt University Medical Center, compares AI in healthcare today to laboratory testing before certification and standardization made it safe for clinicians to use without worrying about accuracy.

“Most of the AI tools are relatively immature, so trust is low and there’s a lot of uncertainty about how the algorithms work,” he says. “And then you’ve got variation in how the tools are being deployed.”

More broadly, most health systems have no experience deciding how to use AI-enabled technologies to improve care. Buying an AI-enabled tool just because it is available is likely a waste of money and energy.

Suresh Balu, program director for Duke Institute for Health Innovation (DIHI) and associate dean for Innovation and Partnership for Duke’s School of Medicine, explains, “It has to address a practical problem that we are facing. Our work is not about developing a model — it’s about developing a model that is going to fit into the clinical workflow so that it can be put into practice and adopted.”

The scope of change coming with AI can feel overwhelming, but sitting on the sidelines until AI for healthcare is mature puts a health system at risk of being unprepared for those changes. Physician leaders shared with PLJ their how-to-get-ready perspectives.

How Will AI Change Healthcare Delivery, and How Quickly Is That Change Coming?

AI is a collection of computer algorithms designed to solve specific tasks. Its subsets — machine learning, deep learning, supervised learning, unsupervised learning, and others — are suited to different kinds of tasks.
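To make the distinction concrete, here is a toy sketch in Python, using the scikit-learn library and synthetic data rather than any clinical dataset: supervised learning fits a model to examples with known outcome labels, while unsupervised learning looks for structure in unlabeled data.

```python
# Toy illustration of two AI subsets, using scikit-learn and synthetic data.
# This is a sketch for orientation only, not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 "patients", 3 numeric features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # known labels, e.g., readmitted or not

# Supervised learning: learn a mapping from features to a known outcome label.
clf = LogisticRegression().fit(X, y)
print("Predicted risk for one patient:", clf.predict_proba(X[:1])[0, 1])

# Unsupervised learning: find structure (clusters) with no labels at all.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("Cluster assignments for first 5 patients:", clusters[:5])
```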

Matheny describes three common uses for AI in healthcare: (1) back-office applications associated with the business of healthcare; (2) diagnostics, such as medical imaging; and (3) clinical decision support algorithms that predict patient outcomes.

Although vendors are aggressively marketing hundreds of AI applications for healthcare, penetration is not yet widespread, says Matheny, an editor of NAM’s Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. The most common uses are in back-office operations and imaging; some academic medical centers are using AI for clinical decision support, but many are not.

Predictions about the pace of widespread adoption for any of the three buckets vary, depending on who you talk to. Richard Shannon, MD, senior vice president and chief medical officer for Duke University Health System, expects AI to change clinical decision-making in the very near term. Duke AI Health is already using predictive algorithms to forecast clinical outcomes for a wide range of medical conditions.

Shannon, who is a senior advisor for Duke AI Health, says, “I think we will see a transformation occur, particularly around clinical outcomes prediction, within the next three to five years.”

“We can better risk-stratify patients at the beginning, but also give more specific outcomes data to people,” he says. “What’s your chance of actually going back to work one year after you have your heart transplant? What’s your chance of being able to go on vacation with your grandkids six weeks after your heart surgery?”
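As a rough illustration of the kind of individualized outcome prediction Shannon describes, the Python sketch below trains a simple model on a hypothetical historical cohort and returns a patient-specific probability. The file name, feature names, and outcome column are illustrative assumptions, not Duke's actual pipeline.

```python
# Minimal sketch of patient-specific outcome prediction. The CSV file and
# all column names below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("transplant_outcomes.csv")  # hypothetical historical cohort
features = ["age", "ejection_fraction", "creatinine", "diabetes"]
X, y = df[features], df["returned_to_work_1yr"]  # 1 = back at work in a year

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Check discrimination on held-out patients before trusting any prediction.
auroc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUROC: {auroc:.3f}")

# Risk-stratify one new patient: an individualized probability, not a verdict.
new_patient = pd.DataFrame([[58, 35, 1.4, 1]], columns=features)
print(f"Estimated chance of returning to work: "
      f"{model.predict_proba(new_patient)[0, 1]:.2f}")
```

In practice, any such model would also face the validation, equity, and monitoring demands described later in this article.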

AI Vendors Are Knocking at Our Door. Who Do We Let In?

Family physician Steven Lin, MD, is executive director of the Stanford Healthcare AI Applied Research Team, a group of clinicians who want to use AI-enabled technologies to support primary care and population health by freeing clinicians to focus on improving the quality of care.

“A huge part of our work, whether it’s clinical decision-making or clinical documentation, can be augmented by automation,” Lin shares. “Finding ways that AI and machine learning can improve the current delivery of healthcare using the best practices that we know — that’s what I’m really excited about.”

So far, Stanford does not have leadership directives that clinicians must adopt AI tools, nor does it have many provider-driven “I want to use this” requests for AI technology. “Mostly it’s the industry coming in to say ‘Hey, we have this cool technology — can you use it?’” Lin explains. “And it’s a complicated vetting process that varies at every institution.”

At Stanford, where each division or department makes its own decisions, Lin worries about the information they have to work with. Most AI technologies are marketed using performance statistics based on retrospective data. Only a few AI technologies have been tested in randomized controlled trials. And the Food & Drug Administration, which is still relatively new to AI regulation, is evolving its framework for approval.

“So we can’t rely on the current FDA pre-market approval system to really know whether or not a system works,” Lin warns.

Undaunted, he encourages those responsible for adopting AI technologies to use a framework published by a NAM working group last year that suggests a technology should be adopted only if it meets four tests:

  1. Reason to use: Will the application align with the financial incentives and reimbursement policies that will justify the investment?

  2. Means to use: Does the organization have the resources — both the information technology and the people trained in its use — to not only deploy, but also maintain this AI tool?

  3. Method to use: Does the technology fit seamlessly into clinical workflows? “Is it actually improving the efficiency and quality of care?” Lin says. “Or is it just getting in the way?”

  4. Desire to use: Do clinicians and patients trust the technology? Will the application alleviate provider burnout, or will it be viewed as another hassle that clinicians have to deal with?

What Infrastructure Do We Need To Participate in the AI Transformation?

First and foremost, Matheny advises, organizations need leadership that understands the promise of AI and the dangers associated with it. “You need to identify — either within your healthcare system or a strategic hire — the person who can help build a leadership team around the governance of these tools,” he says. “Each healthcare system is going to face the choice of when, how, and under what circumstances to deploy some of these tools, and having the expertise available to help work through those decisions is going to be critical.”

At the University of Michigan, a clinical intelligence committee is chaired by a nephrologist who serves as the health system’s associate chief medical information officer for artificial intelligence, James shares. That interprofessional committee vets AI-enabled tools and oversees their deployment across the health system.

The committee decides which tools will be piloted and, depending on their performance, put into practice. “After it’s deployed, they’re also responsible for monitoring the performance of that algorithm,” James says. “Some models continue to learn, best practices change, and populations change. So you have to make sure that models are updated and that they are still performing optimally and going to be safe for patients.”
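Here is what that ongoing monitoring can look like in code, sketched under assumed names and thresholds; the data extract, column names, and alert margin below are hypothetical, not Michigan's actual tooling.

```python
# Minimal sketch of post-deployment model monitoring: periodically re-score
# recent cases and flag the model for committee review if performance drops.
# Thresholds, file name, and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.82  # performance measured during the validation pilot
ALERT_MARGIN = 0.05    # tolerated drop before the committee is alerted

def monthly_performance_check(recent: pd.DataFrame) -> bool:
    """Return True if the deployed model still performs acceptably.

    `recent` holds last month's cases with the model's stored prediction
    (`predicted_risk`) and the observed outcome (`outcome`).
    """
    auroc = roc_auc_score(recent["outcome"], recent["predicted_risk"])
    print(f"AUROC this month: {auroc:.3f} (baseline {BASELINE_AUROC:.3f})")
    return auroc >= BASELINE_AUROC - ALERT_MARGIN

recent_cases = pd.read_csv("scored_cases_last_month.csv")  # hypothetical extract
if not monthly_performance_check(recent_cases):
    print("Performance drift detected; flag model for retraining or review.")
```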

What Are the Perils of AI in Healthcare?

AI tools can exacerbate healthcare disparities in at least two ways: (1) by using biased data that produce guidance that does not apply to all patients and (2) by limiting access to the technology to patients treated at health systems with significant AI expertise.

Duke AI Health data scientists use a three-step process to develop algorithms that are equitable and ethical. “The data (used to train the algorithm) must include enough people of color and various backgrounds so that it is universally applicable,” Shannon explains. “Many data sets are not enriched with adequate numbers of people of color so that the algorithm is biased.”

The steps in the Duke Health process are:

Step 1: Understanding all the processes involved in, for example, evaluating a patient for cardiac surgery. Knowing all the details is essential to making sure the correct data are used.

Step 2: Gathering those data points for a large number of patients that reflect the organization’s total patient population and putting them in a single dataset. “This includes the social drivers of health,” Shannon says. “It’s not enough just to know about how a person will respond to a medication. It’s equally important to know whether they’re food insecure or homeless or uninsured, because we know those factors will have an impact on their long-term outcome.”

Step 3: Testing the algorithm in practice for six months to see how it performs. “To be ethical, it must be proven to work, it has to be tested repeatedly,” Shannon says. “And to be equitable, it must apply to everybody, not just a group of insured white men.”
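One way the equity check in Step 3 might be implemented, sketched in Python with hypothetical column names and pilot data, is to measure the algorithm's performance separately for each patient subgroup and look for gaps.

```python
# Minimal sketch of a subgroup equity audit on pilot results. The input file
# and column names are hypothetical, not Duke's actual process.
import pandas as pd
from sklearn.metrics import roc_auc_score

pilot = pd.read_csv("six_month_pilot_results.csv")  # hypothetical pilot data

# AUROC computed separately for each self-reported race/ethnicity group.
for group, rows in pilot.groupby("race_ethnicity"):
    auroc = roc_auc_score(rows["outcome"], rows["predicted_risk"])
    print(f"{group}: n={len(rows)}, AUROC={auroc:.3f}")

# Large gaps between groups suggest the model does not "apply to everybody"
# and should be retrained on more representative data before clinical use.
```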

How Can Community Hospitals and Others That Don’t Have AI Capabilities Keep from Being Left Behind?

Limiting access to AI tools that can improve health outcomes is a danger. “If you’re a safety net hospital and you can’t afford the kind of data expertise that we have at Duke, you will be denying many people access to this cutting-edge technology,” Shannon says.

With funding from the Gordon and Betty Moore Foundation, health systems across the country, including Duke, Kaiser Permanente, Mayo Clinic, and the University of Michigan, have joined to form the Health AI Partnership, which aims to help all healthcare organizations learn how to adopt AI technologies.

While not all health systems have the same level of technological expertise and bandwidth, all will need the capacity to make wise decisions about AI tools and implement those tools effectively, according to Mark Sendak, MD, Population Health & Data Science Lead at Duke Institute for Health Innovation.

He spearheaded a research effort involving 89 interviews and several usability-testing sessions with stakeholders from 11 diverse healthcare organizations; the work identified eight key decision points in AI adoption, along with the steps involved in each:(1)

  1. Identify and prioritize a problem.

  2. Define AI product requirements and assess feasibility of adoption.

  3. Develop measures of success for the AI product integration.

  4. Design and test a new optimal workflow that integrates the AI tool.

  5. Generate evidence of AI product safety and effectiveness prior to clinical use.

  6. Execute AI product rollout, workflow integration, communication, and education.

  7. After operationalization, monitor and maintain the AI product and work environment.

  8. Update or decommission the AI product or adapt processes as necessary.

“There is a role for technology in improving care, but we want to make sure that it’s done in a responsible, safe, effective and equitable fashion,” Sendak says. “The Partnership is trying to standardize the way health systems approach and adopt these technologies.”

The Partnership will start publishing guidance documents this year, and it hopes to attract widespread participation from health systems.(2)

“The broader the engagement, the better this effort will be, because the CMOs and CEOs and CMIOs of healthcare systems will have different sets of experiences,” Duke’s Balu says. “We want to understand the way they do things to take care of the population in their communities and how we can support their clinical care teams in the use of AI and ML.”

References

  1. Anonymous Author(s). Organizational Governance of Emerging Technologies: AI Adoption in Healthcare. In Proceedings of Conference on Fairness, Accountability, and Transparency (FAccT ’23). New York, NY: ACM; 2023. https://doi.org/10.1145/foo.bar

  2. Health AI Partnership (HAIP). https://www.healthaipartnership.com


Lola Butcher

Lola Butcher is a freelance healthcare journalist based in Missouri.
