Artificial intelligence raises both hope and fear for medicine’s future. Whether you see the promise of better care or George Orwell’s Big Brother says as much about you as it does the technology. Laura Landro has done a deep dive into the world of AI listening in physician exams. She brings facts and figures, the good and the potentially ugly when it comes to ambient listening technology.
This transcript has been edited for clarity and length.
Mike Sacopulos: My guest today is Laura Landro. Ms. Landro is an award-winning writer and journalist. She is a former Wall Street Journal “Informed Patient” columnist and an assistant managing editor. She’s also the author of the book Survivor: Taking Control of Your Fight Against Cancer.
Laura Landro, welcome to SoundPractice.
Laura Landro: Glad to be here.
Sacopulos: I, along with some of the staff at the American Association for Physician Leadership, was interested in your recent Wall Street Journal article, “Why AI May Be Listening In On Your Next Doctor’s Appointment.”
The article explores how AI is being integrated into medical settings through ambient listening technology. Can you describe for our audience what ambient listening technology is?
Landro: The way we described it in the WSJ story was, “even the hospital walls may have ears.” The idea of ambient listening is that the listening technology is in the background, but it is listening to everything that’s going on in the medical encounter.
The conversation between the clinician and the patient is recorded, with the idea of reducing the burden of documentation and note-taking, of staying long after hours to complete or create the notes, which, of course, is something doctors are required to do.
Some practices use human scribes, so that’s probably one of the first jobs to be eliminated by AI. The ambient tech listens and also manages to sort out the noise. For example, if you talked to your doctor about the weather or how your weekend was as part of the patient interview, that would be edited out. What would result would be what the visit entailed, what was described, and what was decided — basically, the classic doctor’s note.
Sacopulos: How did you become interested in this topic?
Landro: I’ve worked at the Wall Street Journal for 35 years, including covering the dawn of electronic medical records and the use of technology and the internet. And when I retired from the Journal full-time, I continued to be a contributor focusing on medical and healthcare issues.
Technology has only gotten more important and more central to what’s happening in medicine. So I’ve focused on a lot of ways that technology can improve medicine and improve patient safety. My focus is very much on things that have to do with the patient, so the idea that AI could be helping to accurately record my doctor’s appointment and enable doctors to have less burnout so they don’t quit and leave us without care — all of those things just really intrigued me about this one.
Sacopulos: The article mentions research showing that the technology saves time and reduces physician burnout. What did you learn about that research when you put together this great article?
Landro: When I spoke to different institutions that were using it, there was this palpable excitement, and it was one of these things where if you could save a few minutes and get home for dinner, that was a big deal. A lot of these institutions surveyed their doctors about it, asking, “What was it that was the most important thing about this to you?” In almost all cases, they mentioned those things.
One of the big users of this is Mass General Brigham. They did a pilot program using two different kinds of ambient technology from various companies. The doctors reported cutting their daily documentation time outside of clinic hours from 90 minutes to under 30 minutes.
About 80% of them said they focus more on their patients and indicated they were more likely to extend their careers instead of retiring early. They love the technology, and the chief medical information officer there said that giving the doctors back time for their own lives was transformative.
Sacopulos: That’s the view from the clinician’s side? We don’t have to go too far down this path on the patient side to have somebody raise their hand and say, “What about privacy?” Should patients be concerned?
Landro: Yes, we’re always concerned about privacy, and I think that’s one of the confidence hurdles they have to clear to make this acceptable for patients.
They’d have to consent. Every time you talk to a patient, you have to say, “I’m using this technology to listen to you. Once the note is generated, the recording is deleted, and what will be left is the note. I’m going to review the note as I would any other note, and you’re going to have the note, so you’ll know whether it’s accurate or not.”
Now, on the privacy front, they have to be reassured that this is part of their medical record and will be confidential. That’s where I think a lot of people have concerns. This just adds another layer of information about them that’s going to be in those records, and it behooves the medical and legal professions to continue to convince them that private information is private.
Sacopulos: Are there certain areas of medicine where it’s not being used because of the especially sensitive nature of the encounter? I’m thinking of maybe psychiatric records, addiction records, and sexual violence care. Has the medical community so far said, “off limits for these types of topics,” while in other areas it’s fine to use?
Landro: In most places I talked to, the primary users were primary care docs. We’re still in the early days here. One of the doctors said to me, “We’ve only scratched the surface of what this is going to be able to do,” and obviously, all those questions will have to be answered.
However, one of the important issues is whether patients want that technology to be used in those kinds of private areas. Maybe they do want it for accuracy. You’d have to get consent from every patient, and that’s a big bioethical issue.
Some of the technology companies have it specific to a specialty, so they have a special scribe that knows cardiology terms, for example. I don’t know if this has yet made the leap into mental health. They’re mostly focusing on the kinds of specialties where you’re having a medical encounter that isn’t psychiatric or counseling-related.
The other area they’re focusing on is transferring it to the hospital. It’s going to be in the ICU, at the bedside, and used by nurses. So, at that point, I’m sure someone will have to answer these questions, like, should we really be using this for substance abuse counseling, mental health counseling, or adolescent counseling?
It’s all a matter of whether you have technology that’s capable of knowing the terminology and what the important things are. And that’s all in how it’s trained with these large language models, and that’s obviously a big question.
Sacopulos: Do you know what happens if a patient objects to the ambient listening technology? My guess is that patients may be a little concerned about objecting because it could be irritating to a clinician, right? For the very reasons that it’s positive for them. “You’re adding extra hours to my day or more work for me by rejecting this technology.”
Landro: Obviously, there are those people, and those people are going to think it’s all a big conspiracy, or they’re going to be legitimately concerned that they don’t want someone, or a technology, Big Brother, listening to them. I spoke to some patients and reviewed some of the surveys that were done with patients. One woman I spoke to was thrilled with it. She shared, “The doctor paid much more attention to me. He wasn’t constantly looking at his computer, and he wasn’t scribbling things down. He was just focused on me, very present, and I felt more listened to.” So I thought that was interesting.
Sacopulos: They are certainly more listened to, right? On different levels simultaneously.
Landro: Yes, hopefully. Now, remember, the ambient scribe technology is not going to eliminate distraction entirely, because as they push forward to see what they can do with this in the future, one of the things that they’re talking about, or that they’re already doing, is push notifications.
While the doctor is listening to a patient talk about these symptoms, the technology might introduce a little note or a little alert that says, “Should you be testing this person for diabetes?” That’s the kind of thing that possibly could be annoying on the one hand. On the other hand, the idea is to introduce prompts so the doctor doesn’t miss anything.
The one line it hasn’t crossed, and this is not regulated right now, is becoming software as a medical device. That would mean it was actually a black box making a diagnosis without the doctor being able to see transparently how that diagnosis was reached.
I think doctors like it because it relieves doctors of the burden of things they don’t want to do, but doesn’t try to replace what they do do, which is make a diagnosis, even though, as we know, it probably has some capability to make a diagnosis.
Sacopulos: Just how accurate is the technology?
Landro: Well, that’s the funny part. You keep hearing about AI hallucinations, and one study from the Permanente Medical Group talked about these mistakes and how closely it can listen. Is it really perfect yet? I thought one example was amusing: the patient and the doctor were talking about the doctor’s concerns about the patient’s hands, feet, and mouth, and the AI came up with a statement that the patient had been diagnosed with hand, foot, and mouth disease.
In another case, they said that the patient had been provided with a prostate exam when, in fact, what had been said was that they should schedule a prostate exam.
That’s why none of this is ever allowed to just go through into the record. The doctor still has to look at it, but looking at it takes less time than starting from scratch, writing it, and going back to scribbled notes to put it in the computer. I think that’s where the time savings are. But hallucinations are always a concern.
Sacopulos: You mentioned there seem to be a few healthcare systems that are leading the way here, Mass General, University of Michigan, Stanford, among others. Could you highlight what they’re doing and where physician leaders should pay attention to the work that’s being done there?
Landro: I think the best thing about them is that these are big academic medical institutions. You can see what they’re doing because they’re publishing it. It’s all peer-reviewed studies. They’ve published some really interesting papers over the last couple of years about their findings, and I think they really have chapter and verse about how they rolled these things out, what doctors were asked to do, and how many doctors wanted to do it versus others who were less thrilled about it.
I think that because there’s already a body of literature, there’s a lot of data out there.
A lot of companies are approaching it in different ways, offering different bells and whistles or focusing on different kinds of medical specialties. The one I mentioned in the article, Arden Healthcare, is not an academic medical center. It’s a big healthcare system, but they’re in some rural areas. One of the things that was important to them was getting doctors to want to work in some of these places. You have to give them something that makes their life a little easier, and this was considered a perk, to be able to have something like this. It enabled them to say, “Hey, if you come here, we have this great technology that won’t require you to stay an hour after you’re done making up notes.” So different hospital systems have different incentives that they can offer for this.
Sacopulos: Have you seen the technology or what an exam room looks like? I’m trying to envision. … Is this something that the clinician wears? Is it something that’s in each exam room? Physically, what are we dealing with?
Landro: It can run on anything. It can be on the doctor’s phone, it can be on a mobile device, it can be on a laptop. It can be mounted on a wall if you’re in a hospital room, so the technology can basically be delivered by any device or any system.
I think that’s what makes it easy, because if the doctor has his phone nearby, that’s where the prompts might come up. “Check this patient’s blood sugar.” That is one of its attractions. It can go with you or stay where it is, whatever is preferred.
Sacopulos: And I assume it’s being used in telemedicine settings as well.
Landro: I wouldn’t see why not. Again, it’s still in the early days where they’re trying to keep it in the actual in-person encounter. Telemedicine has become such a big part of how many doctors interact with patients at a distance, and I’ve had my share of these appointments. It’s hard to see why it wouldn’t work, although that hasn’t been the focus.
The focus in these early days has been the primary care patient encounters, the cardiology encounters, the ones where the patient is in with the doctor.
Sacopulos: You may know that the law is notoriously behind, and you mentioned that there’s no regulatory structure at the moment. Do you foresee there being regulations or rules of the road for this technology as we move forward?
Landro: Well, again, that goes back to whether you cross the line. A lot of these scribes are very specifically marketed as note-taking and documentation tools, but they are edging closer to the question of when it crosses from suggesting a diagnosis to saying, “This is the diagnosis.”
I think the “software as a medical device” rules are basically that if it influences clinical decisions, it could fall under FDA oversight. And there are a lot of enhancements that could come along that could trigger a question of, wait a minute, is this something we should be regulating? And then there are liability concerns.
So let’s say a scribe makes a faulty inference, as we discussed with the hand, foot, and mouth disease, or it omits a critical part of the encounter. What if it nudges a doctor toward a certain diagnosis? Then there’s a gray area of responsibility if it’s wrong. There are tons of questions about this, and those people with big law books in their offices should probably be thinking about them.
But when do you cross over from transcription to summarizing, analyzing, guiding? There are decision support tools that do fall under software as a medical device, but I think some of the terms I heard were, “Oh, ambient AI is decision-adjacent.” It’s coming close to making that decision for you.
Microsoft acquired Nuance a few years ago, which was the first one everybody knew about in this space, and they’re very specific. They call their technology Copilot, and they say, “We’re not a pilot, we’re not an autopilot. We’re a copilot. We’re with the doctor, but the pilot is the pilot, and they’re in charge.”
Sacopulos: Something that you mentioned that really caught my attention in the article was the reference to Press Ganey and voice tones. Could you talk a little bit about that?
Landro: There is this whole speech recognition, speech interpretation technology that’s starting to be used in research. Press Ganey, as you know, is a surveyor of patients. They probably dominate the market for those post-hospital surveys that you get, the consumer survey of their hospital experience.
Press Ganey is using some of its knowledge of patient satisfaction and patient experience to work with the speech recognition idea. And once they have that melding of what patients say and what their voices may carry, then they hear how the doctor reacts.
One of the things they’re looking at is whether the doctor is being empathetic enough. Hospitals use these surveys to train their doctors if they’re getting bad satisfaction ratings, and I think one of the ideas is if we can help doctors, we could coach them on being more empathetic by putting together what we know with what Press Ganey is able to determine from these patient satisfaction surveys.
You analyze the content and tone of the conversation between the patient and the doctor alongside the survey data from patient experience, and then you can coach the doctor to have difficult conversations and show more empathy.
Sacopulos: Very interesting. Some of our audience members will likely remember Dragon and Nuance for dictation software. Who are the new tech players in this space?
Landro: There are quite a few of them in addition to Microsoft and Nuance. There’s Ambience. There’s one called Augmedix that’s publicly traded. There’s Abridge, which was started by a cardiologist; I think it was piloted at UPMC. One called Onpoint is very much tailored to specifics like listening to patient visits, generating the structured notes, and integrating them into any electronic health record, but they emphasize specialty-specific content, so orthopedic exams, sports medicine, surgical specialties. And there’s another one called Suki.
So what you’ve got is a lot of people crowding into this space and trying to differentiate themselves, like, “We’re really good for this specialty” or “If you’re a small group practice, this might be better for you.” I think you’re going to see a lot of deals get done in this space. Everybody who comes up with an idea like this probably has the idea that one day, maybe Microsoft will buy it, but it’s a very hot space right now. Lots of names. The other requirement is that they all have to integrate with whatever electronic health record the hospital uses: Epic, obviously, which most of the health systems I’ve talked to use, or Cerner, which is now Oracle. You have to be able to integrate with these systems so that the note can go right into the electronic health record.
I think you’ll also see, if you have an Epic system, for example, and one of these systems, maybe you’ll be able to allow it to add other features. So I think it’s going to be a little bit of an unsettled space until the dust settles and the players that are really adding value emerge beyond Microsoft, but there are definitely some out there that already have a good foothold.
Sacopulos: As our time together draws to a close, I’m interested in what you see this space looking like in 24 months, where you see things headed.
Landro: I think there’s some concern that this just means doctors will be told to use their extra time to see more patients. I’ve heard some of that concern, so that will have to be figured out. If the idea is to prevent burnout and save time, I don’t know why you would want to use the saved time to add more patients.
One of the most important things in healthcare, obviously, is billing and coding, and figuring out how all this is going to save us some money, but at least not cost us money.
And one of the things these systems do, in addition to documenting the medical encounter, is go to the people in the back office and say, “Here’s what you can code for this. Here’s what you can code for that.” They will auto-generate codes, and again, those have to be reviewed, but the idea is that this is going to save time and digging, and practices will be able to do this more efficiently than they do now, or more accurately, or more convincingly, I guess, for the payers.
Sacopulos: Excellent. The Wall Street Journal article is titled “Why AI May Be Listening In On Your Next Doctor’s Appointment.” My guest has been journalist Laura Landro. Thank you so much for being on SoundPractice and for your time and good reporting.
Landro: Thank you.