
AI Hallucination — Tips for Preventing Digital Delusions in Healthcare

Neil Baum, MD


Petar Marinkovic


Jan 9, 2025


Physician Leadership Journal


Volume 12, Issue 1, Pages 36-38


https://doi.org/10.55834/plj.6714506837


Abstract

The AI lexicon uses the term “hallucinations” to describe generative AI chatbot responses that contain inaccurate, false, or misleading information. We must identify these hallucinations, whether mild deviations from facts or outright fabrications, and remove them from any content generated by chatbots. This underscores the necessity of fact-checking chatbot responses to ensure the accuracy and reliability of your content. This article explains how AI hallucinations occur and suggests ways to avoid and remove them.




Artificial intelligence has transformed all aspects of society, including healthcare. Increasingly, people are turning to the internet to self-diagnose and determine treatment for illnesses, and AI chatbots are designed to provide fast, easy-to-use information.

For healthcare providers, AI has the potential to improve medical care by easing the burden of documentation, reading radiographs, and analyzing data.

Unfortunately, many patients and caregivers are not aware that AI programs are often trained on misinformation or incomplete information.

Let’s say you asked an AI tool to develop a consent document for the surgical removal of a cancerous kidney. If the tool was trained on the surgical removal of the prostate gland and lacked sufficient exposure to renal surgery to understand the differences between the two organs, it might not be able to provide an accurate response. The platform would generate a draft because you prompted it, but it might leave out essential sections specific to nephrectomy for cancer or, even worse, make them up. These are AI hallucinations.

Language-related problems can also cause inaccuracies and hallucinations. AI models must keep pace with the constant evolution of language to avoid barriers caused by changing terminology and jargon. When prompting an AI tool, it is best to use plain language free of jargon whenever possible, leaving less room for misinterpretation.

AI hallucination is more than a random software bug — it has considerable real-life implications that can harm your practice.

Providing false information erodes patient trust and can lead to patient dissatisfaction, loss of credibility, and potential legal issues. This underscores the importance of vigilance and caution when using AI technology.

PREVENTING AI HALLUCINATIONS

It is difficult to fully control an AI platform’s output, but you can steer your solution in the right direction to minimize the risk of generating false information. Here are a few suggestions for doing so:

Provide references and data sources

Providing specific resources for the topics you are addressing is one of the easiest ways to help the AI program avoid false information. For example, you can prompt an AI platform to look for reputable articles and credible research published in specific peer-reviewed journals or on the websites of organizations whose data you trust. Academic institutions typically have an .edu domain, so you can prompt the AI tool to draw only from those websites.

If you are writing an article about immunotherapy, you can prompt the AI program to use only data from the Mayo Clinic and the Journal of Immunotherapy. This helps ensure the results come from reputable sources and reduces, though does not eliminate, the fact-checking burden.
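
As a rough illustration, such a source restriction can be written directly into the prompt itself. The short Python sketch below simply assembles that kind of instruction; the topic and trusted sources are illustrative assumptions based on the example above, not a recommendation.

  # A minimal sketch of a source-restricted prompt. The topic and the trusted
  # sources are illustrative assumptions drawn from the example in the text.
  trusted_sources = ["the Mayo Clinic website (mayoclinic.org)", "the Journal of Immunotherapy"]

  prompt = (
      "Summarize the current evidence on immunotherapy for prostate cancer. "
      "Use only information from " + " and ".join(trusted_sources) + ". "
      "If a claim cannot be attributed to one of these sources, say so rather than guessing."
  )
  print(prompt)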

Offer sufficient context

Tell the AI model why you need what you are asking for. Providing context increases the usefulness of the output.

Consider the following two prompts as an example:

  • What are the most common side effects of Nubeqa?

  • What side effects of Nubeqa should I warn patients constitute an emergency?

The second prompt provides relevant information the AI tool can use to focus its answer.

When prompting an AI tool, try to give it as many specifics as possible. For example, instead of saying, “Write an introduction to an article about options for treating prostate cancer,” prompt the AI tool with:

“Write a 150–200-word introduction to an article about treatment options using immunotherapy for metastatic hormone-resistant prostate cancer. The article will be published in a newsletter for patients, and the tone should be informal yet informative. Use medical literature from 2022 with relevant statistics about the current state of immunotherapy.”

This prompt specifies all the necessary details: the content’s tone, length, sources, and purpose. It gives the AI tool enough direction to ensure accuracy while reducing the need for extensive editing.
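
If you write prompts like this regularly, it can help to assemble them from explicit fields so that no detail is forgotten. Below is a minimal Python sketch; the function and field names are our own, purely for illustration, and are not part of any AI tool.

  # A minimal sketch of assembling a detailed prompt from explicit specifics.
  # The function and field names are illustrative, not part of any AI tool's API.
  def build_prompt(topic: str, length: str, audience: str, tone: str, sources: str) -> str:
      return (
          f"Write a {length} introduction to an article about {topic}. "
          f"The article will be published in {audience}, and the tone should be {tone}. "
          f"{sources}"
      )

  print(build_prompt(
      topic="treatment options using immunotherapy for metastatic hormone-resistant prostate cancer",
      length="150-200-word",
      audience="a newsletter for patients",
      tone="informal yet informative",
      sources="Use medical literature from 2022 with relevant statistics about the current state of immunotherapy.",
  ))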

Use limited-choice questions whenever possible

Ambiguous prompts are among the most common causes of inaccuracies and hallucinations because they give an AI tool too much room to misinterpret the intent behind them.

To avoid such issues, replace open-ended questions with limited-choice ones. Here are some examples:

  • Open-ended: How has treating prostate cancer changed in recent times?

  • Limited-choice: What are the outcomes for men with castrate-resistant prostate cancer (CRPC) during 2021 and 2022 according to data from the National Cancer Institute?

  • Open-ended: How do patient testimonials affect traffic to my website?

  • Limited-choice: Do patients trust testimonials on websites more than they trust information in newsletters? Support your answer with research that is no older than two years.

As a side note, prompt your AI tool to let you know if it can’t find credible data to support its claims. This will reduce the risk of the AI tool making up facts without resources.

Give your AI tool a specific role

Role designation gives AI tools additional context behind your prompt and influences the response style. It also enhances the content’s accuracy.

To assign a role, you can use a prompt like this one:

“You’re a pathology lab specializing in pathologic identification of prostate cancer biopsies. What advice would you give to a urology practice that wants to bring the lab ‘inside’ the office and avoid submitting slides to an outside lab?”

A detailed prompt like this one, which clearly outlines the AI tool’s role, has a much higher chance of yielding a useful answer than a generic instruction to provide lab tips for small group practices.

In sum, if you explicitly assign an AI model a role that carries expertise, its responses should be more accurate.
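
For those who work with an AI model through its API rather than a chat window, the role is typically supplied as a "system" message. Below is a minimal sketch assuming the OpenAI Python SDK and an illustrative model name; other providers follow a similar pattern.

  # A minimal sketch of role assignment via a system message, assuming the
  # OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment
  # variable; the model name is an illustrative assumption.
  from openai import OpenAI

  client = OpenAI()
  response = client.chat.completions.create(
      model="gpt-4o",  # illustrative model name
      messages=[
          {"role": "system", "content": "You're a pathology lab specializing in "
           "pathologic identification of prostate cancer biopsies."},
          {"role": "user", "content": "What advice would you give to a urology practice "
           "that wants to bring the lab 'inside' the office and avoid submitting slides "
           "to an outside lab?"},
      ],
  )
  print(response.choices[0].message.content)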

Adjust the AI tool’s temperature

AI tools like ChatGPT have temperature settings that let you directly affect the creativity and randomness of the AI model’s response, which can drastically reduce the risk of hallucinations.

Temperature values typically range from 0 to 1, where higher values produce more creative, less predictable responses. Values between 0.5 and 0.7 are common reference points for general content writing.

As a medical professional, you might want to choose a value under 0.5, because lower values keep the response more predictable and fact-oriented.

Adjusting the temperature of your AI model doesn’t require any complex or technical process; you simply tell the AI tool which value to use within your prompt. By doing so, you gain more control over the model’s creativity and minimize deviations from facts.
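
When you work through an API rather than a chat window, the temperature is usually set as a request parameter instead of being written into the prompt. Here is a minimal sketch, again assuming the OpenAI Python SDK, with 0.2 as an illustrative low value.

  # A minimal sketch of lowering the temperature, assuming the OpenAI Python SDK;
  # 0.2 is an illustrative low value that favors predictable, fact-oriented output.
  from openai import OpenAI

  client = OpenAI()
  response = client.chat.completions.create(
      model="gpt-4o",   # illustrative model name
      temperature=0.2,  # lower values reduce randomness in the response
      messages=[{"role": "user",
                 "content": "List the most common side effects of Nubeqa "
                            "as described in its prescribing information."}],
  )
  print(response.choices[0].message.content)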

Leverage negative prompting

AI hallucinations most commonly occur because of a lack of “guardrails”; negative prompting can help prevent them. Negative prompting involves telling an AI model what you don’t want to see in the response.

Here are some examples of negative prompts:

  • “Limit references to the last five years.”

  • “Don’t provide any health advice.”

  • “Don’t include any information found on any pharmaceutical sites.”

The only caveat of negative prompting is that you need to think ahead to limit the most common causes of hallucinations, which might take some getting used to.

Still, like anything else in medicine, your ability to receive accurate and helpful information increases with time and experience.
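
One practical habit is to keep a short, reusable list of guardrails and append it to every prompt. Below is a minimal Python sketch; the guardrail wording is illustrative and should be adapted to your own practice.

  # A minimal sketch of appending reusable negative-prompt guardrails;
  # the guardrail wording is illustrative, not a fixed standard.
  GUARDRAILS = [
      "Limit references to the last five years.",
      "Don't provide any health advice.",
      "Don't include any information found on any pharmaceutical sites.",
      "If you can't find credible data to support a claim, say so instead of inventing it.",
  ]

  def with_guardrails(prompt: str) -> str:
      return prompt + "\n\nConstraints:\n" + "\n".join(f"- {g}" for g in GUARDRAILS)

  print(with_guardrails("Draft a patient-facing FAQ about prostate cancer screening."))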

Fact-check AI content

Regardless of the response you receive, avoid copying and pasting the AI-generated content. Humans must verify and fact-check the response even as AI becomes more accurate. Adjust your prompts to provide your AI model with sufficient context and details, and then verify all the key information within the output to stay safe.

Verify everything before publishing to prevent false claims caused by hallucinations.

BOTTOM LINE

AI has grown exponentially and evolved significantly in the last few years, but the technology is still in its infancy. It is not surprising that it is not completely reliable, and we have yet to see a model that we can trust fully.

Until then, ensure human supervision while using any generative AI tool. Doing so allows you to create content without putting your reputation or patient trust at stake.



Neil Baum, MD

Neil Baum, MD, is Professor of Clinical Urology at Tulane Medical School, New Orleans, Louisiana, and the author of Medicine is a Practice: The Rules for Healthcare Marketing (American Association for Physician Leadership, 2024).


Petar Marinkovic

Petar Marinkovic is a marketing graduate student and freelance content writer focusing on SaaS SEO content.


