American Association for Physician Leadership

Strategy and Innovation

Demystifying AI in Healthcare: Historical Perspectives and Current Considerations

Daniel J. Quest, PhD | David Upjohn, MS | Eric Pool, EdD, PMP, ITIL | Ronald Menaker, EdD, FACMPE | James S. Hernandez, MD, FCAP | Kenneth Poole, MD, MBA, CPE, FACP

January 8, 2021

Peer-Reviewed

Abstract:

As the healthcare profession explores the use of artificial intelligence (AI), the number of questions and misconceptions seems to increase. These questions and misconceptions are of particular importance in healthcare because healthcare is the next frontier for using this technology. The authors provide a history of machine learning, define key terms found in AI, and present case studies showing how AI is being used to reduce errors, automate calculations to improve outcomes, automate tedious contract negotiations with insurance providers, and provide better overall outcomes and more cost-effective experiences for patients. They also discuss the limitations of this technology and present recommendations for governance and oversight.




The origins of AI can be traced to the 1950s. In 1950, Alan Turing, a British mathematician and logician, questioned whether machines had the ability to think.(1) However, AI wasn’t fully conceptualized and named until the Dartmouth Summer Research Project of 1956, a six- to eight-week workshop in which mathematicians and scientists discussed the simulation of intelligence by machines.(2) More than a decade later, researchers incorporated AI as expert systems in life sciences.

In the early 1970s, AI found its way into healthcare and began to be applied to biomedical problems. The 1970s saw a proliferation of AI research, specifically in medicine. An international AI journal, Artificial Intelligence in Medicine, was founded and in 1980, the American Association for Artificial Intelligence was established with a subgroup on medical applications.(3) AI was incorporated into clinical settings in the decades that followed. Fuzzy expert systems, Bayesian networks, artificial neural networks, and hybrid intelligent systems were among the new uses of AI in healthcare.(4)

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton released AlexNet.(5) This launched the current wave of interest in deep learning by showing that deep neural networks could outperform more conventional AI pipelines on image classification.

In recent years, deep neural networks have shown performance on par with human experts for many tasks.(6) Over the past decade there has been a significant uptick of investment in AI and deep learning in healthcare applications. By 2017, AI in medicine had grown into the predominant industrial application of AI in terms of aggregate equity funding.(7)

Eventually, two branches of AI used in healthcare evolved: physical and virtual. Physical AI includes the use of devices and robots to assist patients and providers in delivering care. Virtual AI is characterized by machine learning and deep learning, in which algorithms are created through repetition and experience.(8)

A Primer in AI

Natural intelligence is intelligence demonstrated by animals or humans. In contrast, artificial intelligence is intelligence demonstrated by machines. Traditionally, such systems are built from simple rules of the “if-then-else” variety. For example, if a patient’s blood pressure is consistently elevated, certain tests should be ordered. If a patient’s LDL cholesterol suggests increased cardiovascular risk, statin therapy should be recommended.
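
To make the idea concrete, the following minimal Python sketch encodes two rules of this kind; the thresholds and suggested actions are illustrative placeholders, not clinical guidance.

```python
# A minimal sketch of rule-based ("if-then-else") clinical logic.
# Thresholds and actions are illustrative placeholders, not clinical guidance.

def blood_pressure_rule(systolic_readings):
    """Flag a patient whose systolic pressure is consistently elevated."""
    if all(reading >= 140 for reading in systolic_readings):
        return "Order follow-up workup for hypertension"
    return "No action"

def ldl_rule(ldl_mg_dl, has_ascvd_risk_factors):
    """Suggest statin therapy when LDL suggests increased cardiovascular risk."""
    if ldl_mg_dl >= 190 or (ldl_mg_dl >= 130 and has_ascvd_risk_factors):
        return "Recommend discussing statin therapy"
    return "No action"

print(blood_pressure_rule([152, 148, 161]))                       # elevated on every visit
print(ldl_rule(ldl_mg_dl=145, has_ascvd_risk_factors=True))
```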

Today, most best practice advisories/alerts (BPAs), which are patient information notifications built into electronic health records, employ logic of this type. The set of rules can be deep, and decision trees have been used to encode very deep sets of rules.(9) Medicine also routinely uses computation to calculate scores, such as the Charlson Comorbidity Index.(10)
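
A weighted comorbidity score of this kind reduces to a small lookup-and-sum computation. The sketch below is a simplified illustration only; the condition names and weights are a partial stand-in for the full validated index.

```python
# A simplified sketch of a weighted comorbidity score of the Charlson type.
# The weights below are a partial, illustrative subset; a production system
# would use the full validated index driven by coded diagnoses.

CHARLSON_LIKE_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_uncomplicated": 1,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def comorbidity_score(conditions):
    """Sum the weights of the patient's documented comorbidities."""
    return sum(CHARLSON_LIKE_WEIGHTS.get(c, 0) for c in conditions)

print(comorbidity_score(["congestive_heart_failure", "diabetes_uncomplicated"]))  # -> 2
```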

BPAs and score calculations are examples of machine intelligence and are widely used to standardize and improve the quality of care. However, AI is more than just a rule-based system or a statistical score computed by fitting a function to data. Modern AI techniques also often include “agency,” meaning the machine is able to perceive its environment and select actions to achieve goals. The most advanced examples of agency have been demonstrated by the DeepMind team, whose systems learned to play Atari video games and beat human champions at the game of Go.(11,12)

At this time, goals in medicine are quite modest because machines currently lack generalized intelligence, or the ability to understand and learn any intellectual task as a human can.

A common strategy for achieving goals, called supervised learning, uses data and human-generated actions so the machine can “learn” from human experience what it should do. In many cases, this is far less time-consuming and generates better results than hand-coding if-then-else logic or complex mathematical equations.
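
A minimal sketch of the supervised pattern, using synthetic data in place of expert-labeled clinical cases, might look like the following (scikit-learn is assumed purely for illustration).

```python
# A minimal supervised-learning sketch: a model "learns" a labeling task from
# examples labeled by human experts. The data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for expert-labeled cases (features plus human-assigned labels).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)            # learn from the expert labels
predictions = model.predict(X_test)    # apply the learned rule to new cases
print("accuracy on held-out cases:", accuracy_score(y_test, predictions))
```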

In radiology, for example, experts identify organ boundaries, and then the machine is trained to identify organ boundaries on new cases based on the patterns found in the expert-labeled images. In these cases, the AI has limited agency (e.g., calculating something we could not easily calculate in the past to improve outcomes and the quality of care for patients), but it does automate something that more conventional approaches could not. In other cases, known as unsupervised learning, the AI must accurately sort, filter, and organize data to make them more useful or to find patterns. The data are not pre-labeled as in supervised learning, nor is the task pre-specified. In practice, these techniques are commonly used to organize, search, and compare clinical notes or genomics results.
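
As a rough illustration of the unsupervised case, the sketch below groups a few fabricated one-line notes by textual similarity; the notes, vectorizer, and cluster count are all assumptions made for the example.

```python
# A minimal unsupervised-learning sketch: grouping unlabeled clinical-style
# notes by similarity. The notes are fabricated one-line examples.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

notes = [
    "chest pain radiating to left arm, troponin ordered",
    "elevated troponin, ECG changes consistent with ischemia",
    "right knee swelling after fall, x-ray negative for fracture",
    "knee pain improved with rest and ice",
]

vectors = TfidfVectorizer().fit_transform(notes)   # text -> numeric features
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for note, cluster in zip(notes, clusters):
    print(cluster, note)   # similar notes tend to share a cluster label
```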

Finally, reinforcement learning is applied in areas such as operations research to optimize existing processes and reduce costs. In reinforcement learning, the AI is given a finite set of actions and feedback loops with the environment and is asked to find an optimal policy subject to constraints such as patient safety and staff happiness.
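
To show the shape of a reinforcement-learning loop, here is a toy tabular Q-learning sketch; the five-state environment is a deliberately tiny stand-in for a real operational model, not an actual scheduling system.

```python
# A toy reinforcement-learning sketch (tabular Q-learning): the agent has a
# finite set of actions, receives reward feedback from a simulated
# environment, and iteratively improves its policy.
import random

N_STATES, ACTIONS = 5, [0, 1]          # 0 = move left, 1 = move right
GOAL = N_STATES - 1                    # reaching the last state yields reward
alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

for _ in range(2000):                  # training episodes
    state = 0
    while state != GOAL:
        if random.random() < epsilon:                       # explore
            action = random.choice(ACTIONS)
        else:                                               # exploit current policy
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# Learned policy: mostly "move right" toward the rewarded state.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```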

Some Challenges and Opportunities

As illustrated in the majority of examples provided later in this article, when a machine performs a task, it is assumed that the results of that task are supervised by healthcare providers/patients and that the task is “helpful.”

Currently, AI performs actions in a targeted way, and the clinical record is annotated so providers understand that the task was done, why it was done, and what the results were. Loose manual supervision, especially when the algorithms are poorly understood, and a vague definition of “helpful” are not sufficient to ensure that AI acts in an ethical manner. AI has been found to reflect the biases in the data it was trained on, to pose challenges for the future as collective medical knowledge migrates toward automated systems, and to create conflicts of interest where patients could be steered toward improving quality metrics without improving care.(13) In addition, as the pace of automation accelerates, manual supervision of AI will become increasingly impractical.

For all of these reasons, AI practitioners are researching and implementing explainable AI, or XAI. XAI is AI implemented in such a way that the results can be understood by a human expert. Initial XAI techniques include partial dependence plots, Shapley values, and LIME analysis.(14) These methods “open up the black box” and allow experts to debug AI algorithms even without access to the underlying implementation. They work by creating scores that represent the contribution of individual data points to the overall prediction.

For example, if a method calculates total kidney volume based on a medical image, each pixel in the original image will have contributed to the overall score. These scores are tabulated and the contribution of each pixel can be overlaid on the original image to gain an understanding of what the algorithm “thinks” is going on — that is to say, the amount each data point contributes to the overall score.
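
One simple way to produce such a contribution map is occlusion analysis: hide one region of the input at a time and record how much the model’s output drops. The sketch below uses a stand-in scoring function and a random image, so the specific numbers are purely illustrative.

```python
# A minimal occlusion-based explanation sketch in the spirit of XAI: occlude
# patches of an image, measure how much the model's output changes, and build
# a per-pixel contribution map that can be overlaid on the original image.
# The "model" here is a stand-in function, not a real segmentation network.
import numpy as np

def model_score(image):
    """Stand-in for a trained model's scalar output (e.g., predicted volume)."""
    return float(image[8:24, 8:24].sum())   # pretends the central region matters

image = np.random.rand(32, 32)
baseline = model_score(image)
contribution = np.zeros_like(image)

patch = 4
for r in range(0, 32, patch):
    for c in range(0, 32, patch):
        occluded = image.copy()
        occluded[r:r+patch, c:c+patch] = 0.0        # hide one patch
        drop = baseline - model_score(occluded)     # how much the score falls
        contribution[r:r+patch, c:c+patch] = drop   # attribute the drop to that patch

print("most influential region (row, col):",
      np.unravel_index(contribution.argmax(), contribution.shape))
```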

This also allows institutions to supervise the actions of AI algorithms systematically, regardless of who created the AI algorithm. Thus, algorithms can be supervised in much the same way that humans are supervised today: through quality checks, outlier checks, boundary checks, and clinical-grade standard operating procedures. Other industries, such as the automobile industry with its self-driving vehicles, have robust data architectures for feeding errors and corner cases back into the system to improve the system.(15)

Currently, AI algorithms need to go through a rigorous process with the FDA to be approved for clinical use. This process is focused on correctness and reproducibility of the algorithm in a relevant clinical context. Logical next steps should include looping the XAI supervision processes into government oversight and institutional oversight (e.g., IRB protocols) to ensure the best outcomes for the patient.

These measures will greatly improve the use of AI in the top healthcare facilities with the resources to implement state-of-the-art compliance and supervision. For the entire world to benefit from AI algorithms, significant changes to our current methodology of building AI are necessary.

There are significant obstacles to worldwide adoption of AI in healthcare.

For supervised learning methods, large training sets or datasets that enable the creation of applications must be obtained in new regions of the world where the algorithm will be deployed, and the algorithm will need to be reparametrized, retrained, and redeployed. For unsupervised learning methods, dataset imbalances will need to be addressed so the dataset is representative of the population it serves.

In reinforcement learning applications, parameters and workflow assumptions will need to be revisited and it may be necessary to redo the entire project if certain workflows are not possible (e.g., in an area of the world without electricity).

AI currently lacks the ability to apologize (e.g., “I’m sorry, I didn’t do that right. I apologize. I will do better next time”). Apologizing is one method humans use to get feedback from others and improve on a given task. In most AI systems, there is no obvious way to reach out and collect data about how well a task was done in the same way humans do when they apologize. Surveys have low response rates. System logs don’t contain all the necessary information. AI is narrow, so it doesn’t begin doing related tasks automatically. Many of these problems don’t have solutions right now, so the narrow AI that currently exists is not going to replace humans in healthcare anytime soon.

What AI Is and What It Is Not

As mentioned previously, there is significant confusion about what AI is and where it should, or should not, be used. Before starting down the path of understanding how useful and powerful AI can be, however, it is important to note that it is a tool people can use to increase efficiency and effectiveness. It is not a replacement for people in the way that machines on an assembly line were.

Sorting historical information and locating missing information is a daunting task and one key area in which AI can be leveraged to save time and improve accuracy. This has been demonstrated by the historical use of AI on large datasets (e.g., radiology and image analysis, digital pathology, and dementia research). Again, the advantage is increased efficiency with less manual work, especially because the ability to learn patterns from large amounts of data enhances prediction accuracy.

AI is also helpful with clinical research and workflow/process improvement (e.g., scheduling, event detection, and quality control). These may be among the earlier uses of AI that deliver some of the highest return on investment for clinics. One example of successful implementation of AI is in the diagnosis of some types of cancer with a level of accuracy rivaling that of humans.(16)

Even with all the benefits AI brings to healthcare, there are still important requirements for successful implementation. As noted earlier, AI continues to require human intervention. If the AI system does not keep the physician in the loop, the physician will not be able to verify the results properly, potentially increasing the risk to patient safety. All of these requirements point to the need for training. Everyone interfacing with AI (e.g., information technologists, physicians, nurses, and researchers) needs to receive adequate and consistent training, not only to be productive, but also to ensure the successful implementation, use, and ongoing support of AI.

Since errors will occur, understanding the types of errors experienced with the use of AI is a priority. False positives and false negatives are among the most common. To combat these types of errors, the focus should be on increasing the size of the dataset to include as many cases as possible while using a correctly sized cohort. It also will be important to work with clinicians to look for these events and cohorts.

Again, human involvement is required. If the outcome label is not a gold standard, the predictions may not be relevant or accurate, as with some of the errors observed in family history data and in information outside expected patient age ranges.
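
In practice, this kind of error audit starts with a confusion matrix comparing model output to clinician-adjudicated labels, as in the small sketch below (the labels are fabricated for illustration).

```python
# A small sketch of how false positives and false negatives are tallied when
# auditing an AI model's predictions against clinician-confirmed labels.
# The labels here are fabricated for illustration (1 = disease present).
from sklearn.metrics import confusion_matrix

truth       = [1, 0, 1, 1, 0, 0, 1, 0]   # clinician-adjudicated gold standard
predictions = [1, 0, 0, 1, 1, 0, 1, 0]   # model output

tn, fp, fn, tp = confusion_matrix(truth, predictions).ravel()
sensitivity = tp / (tp + fn)   # missed cases (false negatives) drive this down
specificity = tn / (tn + fp)   # false alarms (false positives) drive this down
print(f"FP={fp} FN={fn} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```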

Mayo Clinic’s Use of AI

Mayo Clinic has used AI to varying extents since at least the 1990s, and the number of projects has grown exponentially, particularly in recent years, many of them managed with Agile project management and its “…flexibility and continuous project process improvement using a team approach.”(17)

Workflow optimization is a major focus of the AI initiatives, aiming to enhance the ability of providers to spend more time with patients. The areas with the most ongoing initiatives are radiology and pathology, through AI’s contributions to image analysis, as well as natural language processing (NLP) for analyzing unstructured and structured text in the electronic medical record.

NLP has been implemented at Mayo Clinic for more than two decades. In 2018, one initiative ran 61 projects through an NLP platform, averaging 6.6 million documents per cohort. One of the projects focused on cardiac sarcoidosis, which is relatively rare and relies on extensive chart review for diagnosis. Automated computation over the unstructured text was created to enhance the chart review process by cardiologists. Once trained, the program searched disparate parts of the medical record for key terms that indicated possible signs of the disease.

Another NLP project focused on silent brain infarctions (SBIs), which are highly prevalent but underdiagnosed. The NLP tool focused on text-based radiology reports. A neurologist and a neurosurgeon trained the program to home in on a list of SBI-related terms, producing an effective augmentation of the team’s work.(18)

Both of these projects are examples of unstructured data, in the form of clinical notes, being leveraged to improve our operational awareness of larger patterns in our data and to suggest opportunities for future enhancements to the patient experience.
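
At its simplest, this kind of term-based concept detection can be sketched as a keyword search over report text, as below; the term list and report snippets are invented examples, not the Mayo Clinic implementation.

```python
# A minimal sketch of keyword-style concept detection over radiology report
# text, in the spirit of the NLP projects just described. The term list and
# report snippets are illustrative only.
import re

SBI_TERMS = [r"silent (brain )?infarct", r"lacunar infarct", r"chronic ischemic change"]
pattern = re.compile("|".join(SBI_TERMS), flags=re.IGNORECASE)

reports = {
    "rpt_001": "Scattered chronic ischemic changes in the deep white matter.",
    "rpt_002": "No acute intracranial abnormality.",
    "rpt_003": "Old lacunar infarct in the left basal ganglia.",
}

# Surface reports for clinician review, along with the matched term.
flagged = {}
for report_id, text in reports.items():
    match = pattern.search(text)
    if match:
        flagged[report_id] = match.group(0)
print(flagged)
```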

In mid-2014, Mayo Clinic began a partnership with IBM Watson to develop the Watson for Clinical Trials Management (CTM) cognitive computing system. Watson is IBM’s artificial intelligence platform. The goal of the system was to meet the growing need to improve Mayo Clinic Cancer Center enrollment in oncology clinical trials. With so many ongoing clinical trials at a given time, it was nearly impossible for a provider to be aware of all of them and of the trial options for a given patient.

With Watson for CTM, the AI pulls information from a patient’s unstructured text data, including radiology reports, genetic reports, and other sources, and matches those data with the eligibility criteria from the list of 70+ ongoing clinical trials. The program surfaces a streamlined list of possible trials for a provider to review. The AI does not answer every eligibility question, but it assists the provider by pulling the relevant information to the forefront. The pilots saw increased enrollment in the three cancer groups in which AI was tested, and the system is currently being trained for three additional disease groups.(19)
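
Conceptually, the pre-screening step resembles the following sketch, which compares extracted patient facts against per-trial criteria and surfaces candidates for human review; the trials, criteria, and patient data are invented for illustration.

```python
# A schematic sketch of eligibility pre-screening: extracted patient facts are
# compared against trial criteria and candidate trials are surfaced for a
# provider to review. All trials, criteria, and patient data are invented.

patient = {"diagnosis": "breast cancer", "age": 54, "ecog": 1, "her2_positive": True}

trials = {
    "TRIAL-A": {"diagnosis": "breast cancer", "min_age": 18, "max_ecog": 2, "requires_her2": True},
    "TRIAL-B": {"diagnosis": "breast cancer", "min_age": 18, "max_ecog": 1, "requires_her2": False},
    "TRIAL-C": {"diagnosis": "lung cancer",   "min_age": 18, "max_ecog": 2, "requires_her2": False},
}

def possibly_eligible(p, criteria):
    """Screen on a few structured criteria; not a final eligibility decision."""
    return (p["diagnosis"] == criteria["diagnosis"]
            and p["age"] >= criteria["min_age"]
            and p["ecog"] <= criteria["max_ecog"]
            and (p["her2_positive"] or not criteria["requires_her2"]))

candidates = [name for name, criteria in trials.items() if possibly_eligible(patient, criteria)]
print("surface for provider review:", candidates)
```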

Another research project at Mayo Clinic, sponsored by Google, seeks to understand how deep learning can reduce the burden of clinical documentation that lands on providers. Currently, it is estimated that between 35 percent and 55 percent of a physician’s workday is spent creating notes and reviewing records in the EHR. Prototypes for “autoscribe” AI assistants and other automated transcription and analysis programs show promising advances utilizing AI techniques.(20)

Image analysis is a promising application of AI that has been used heavily in areas such as radiology and laboratory medicine and pathology. Radiology has the highest number of ongoing AI projects, as well as the most robust infrastructure, both from an IT systems standpoint and from a staffing and organizational standpoint.

One example is a 2018 project in which a multidisciplinary team from Mayo Clinic’s Department of Radiology developed an automated system to measure total kidney volume (TKV) and total liver volume (TLV) using deep-learning techniques. The manual process for measuring kidney and liver volume is time consuming. The automated segmentation method was shown to be as accurate as manual measurement, and much faster.(21)
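
Once a model has produced a voxel-wise segmentation mask, the volume itself is a straightforward calculation: count labeled voxels and multiply by the physical voxel volume, as in the sketch below (the mask and voxel spacing are synthetic placeholders, not output of the Mayo Clinic system).

```python
# A minimal sketch of deriving a total kidney volume (TKV) number from a
# voxel-wise segmentation mask. The mask and spacing are synthetic placeholders.
import numpy as np

segmentation = np.zeros((64, 64, 40), dtype=np.uint8)   # 0 = background, 1 = kidney
segmentation[20:40, 20:40, 10:30] = 1                   # fake "kidney" region

voxel_spacing_mm = (1.0, 1.0, 3.0)                       # in-plane and slice spacing
voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0     # mm^3 -> mL

tkv_ml = segmentation.sum() * voxel_volume_ml            # count voxels, scale to volume
print(f"total kidney volume: {tkv_ml:.1f} mL")
```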

Unlike radiology departments, where digitization and archiving of images have been commonplace for two decades, most pathology departments do not routinely digitize histopathology images. Tizhoosh et al. delineate several technical challenges.(22) Some pathology departments are using whole-slide imaging to digitize images from glass slides. This allows for better reproducibility, efficiency, and productivity. It also decreases the inter-observer and intra-observer variability inherent in pathologists’ diagnoses using traditional microscopic analyses.

Digitization of pathology images permits machine learning and deep learning using traditional histopathologic images. AI is most advanced in the diagnosis of breast carcinomas.(23)

Conclusion

AI capabilities in healthcare are unlikely to replace clinicians. Rather, AI has the potential to make healthcare more streamlined and personalized by augmenting current processes. From a technical standpoint, AI can enable smarter, more personalized use of information technology to aid in workflow efficiency. It also can help develop specific protocols for individual patient needs.

Other uses include screening and processing pharmaceutical agents to save time and resources in clinical trials, and assisting dermatologists, pathologists, and radiologists in correctly classifying disease states.(4)

The evolution of precision medicine is also highly dependent on AI.(65) Training of both early-career and experienced clinicians will be important going forward to ensure that the applicability of AI in healthcare is maximized.(1) Further integration of AI into healthcare will likely not be without challenges. Clinician acceptance and change management are just one obstacle. Integration and interoperability, standardization, and regulation are additional hurdles.

This article has consolidated mounting evidence that AI and machine learning have significant benefits over more conventional information technology systems. (See Table 1 for an extensive list of applications of machine learning in different areas of medicine.) AI systems are able to account for variation and have been shown in clinical settings to achieve human-level performance. The implementation of “narrow AI” systems will reduce costs and increase the quality of care without removing the need for the essential “human touch.”

Unlike social networks and other innovations where a screen is placed between individuals and interactions become lower resolution, we believe that AI will make data more understandable, prevent mistakes, and allow the providers to spend more time focusing on high-quality interactions with their patients.

References

  1. Turing AM. Computing Machinery and Intelligence. Mind. 1950;59(236):433–60.

  2. Moor J. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine. 2006;27(4):87–91.

  3. Patel VL, et al. The Coming of Age of Artificial Intelligence in Medicine. Artificial Intelligence in Medicine. 2009;46(1):5–17.

  4. Amisha PM, Pathania M, Rathaur VK. Overview of Artificial Intelligence in Medicine. J Fam Med Prim Care. 2019;8(7):2328–2331.

  5. Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. In: 26th Annual Conference on Neural Information Processing Systems (NIPS 2012). 2012. Lake Tahoe, NV.

  6. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: Radiologist-level Pneumonia Detection on Chest X-rays with Deep Learning. arXiv preprint arXiv:1711.05225, 2017.

  7. CB Insights. Healthcare Remains the Hottest AI Category for Deals. Research Brief. April 12, 2017. www.cbinsights.com/research/artificial-intelligence-healthcare-startups-investors.

  8. Hamet P, Tremblay J. Artificial Intelligence in Medicine. Metabolism. 2017;69:S36–40.

  9. Podgorelec V, Kokol P, Stiglic B, Rozman I. Decision Trees: An Overview and Their Use in Medicine. J Med Syst. 2002;26(5):445–63.

  10. Charlson M, Szatrowski TP, Peterson J, Gold J. Validation of a Combined Comorbidity Index. J Clin Epidemiol. 1994;47(11):1245–51.

  11. Mnih V, et al., Playing Atari with Deep Reinforcement Learning. arXiv preprint arXiv:1312.5602, 2013.

  12. Silver D, et al. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature. 2016;529(7587):484–9.

  13. Char DS, Shah NH, Magnus D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. N Engl J Med. 2018;378(11):981–3.

  14. Molnar C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. [GitHub Repository] 2019; Available from: https://christophm.github.io/interpretable-ml-book/example-based.html.

  15. Karpathy A. PyTorch at Tesla. In PyTorch DEVCON 2019. 2019. San Francisco: PyTorch.

  16. Li G. AI Matches Humans at Diagnosing Brain Cancer from Tumour Biopsy Images. NewScientist. January 8, 2020.

  17. Pool ET, Poole KG Jr, Upjohn DP, Hernandez JS. Agile Project Management Proves Effective, Efficient for Mayo Clinic. Physician Leadersh J. 2019. Mar-Apr;6(2):34–8.

  18. Wen A, Fu S, Moon S, et al. Desiderata for Delivering NLP to Accelerate Healthcare AI Advancement and a Mayo Clinic NLP-as-a-Service Implementation. NPJ Digit Med. 2019;2(1): 130.

  19. Helgeson J, et al., Clinical Performance Pilot Using Cognitive Computing for Clinical Trial Matching at Mayo Clinic. J Clin Oncol. 2018;36(15_suppl):e18598-e18598.

  20. Lin SY, Shanafelt TD, Asch SM. Reimagining Clinical Documentation with Artificial Intelligence. Mayo Clin Proc. 2018;93(5):563–5.

  21. Edwards ME, Blais JD, Czerwiec FS, et al. Standardizing Total Kidney Volume Measurements for Clinical Trials of Autosomal Dominant Polycystic Kidney Disease. Clin Kidney J. 2018;12(1):71–7.

  22. Tizhoosh, HR, Pantanowitz L. Artificial Intelligence and Digital Pathology: Challenges and Opportunities. J Pathol Inform. 2018;9:38.

  23. Robertson S, et al. Digital Image Analysis in Breast Pathology—From Image Processing Techniques to Artificial Intelligence. Translational Research. 2018;194:19–35.

  24. Lin D, Vasilakos AV, Tang Y, Yao Y. Neural Networks for Computer-Aided Diagnosis in Medicine: A Review. Neurocomputing. 2016;216:700–8.

  25. Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging. 2017;30(4):449–59.

  26. Kooi T, Litjens G, van Ginneken B, et al. Large-Scale Deep Learning for Computer-Aided Detection of Mammographic Lesions. Med Image Anal. 2017;35:303–12.

  27. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine Learning for Medical Imaging. RadioGraphics. 2017;37(2):505–15.

  28. Weston AD, Korfiatis P, Kline TL, Philbrick KA, et al. Automated Abdominal Segmentation of CT Scans for Body Composition Analysis Using Deep Learning. Radiology. 2019;290(3): 669–79.

  29. Philbrick KA, Weston AD, Akkus Z, Kline TL, et al. RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning. J Digit Imaging. 2019;32(4):571–81.

  30. Korfiatis P, Kline TL, Lachance DH, et al. Residual Deep Convolutional Neural Network Predicts MGMT Methylation Status. J Digit Imaging. 2017;30(5):622–28.

  31. Rajkomar A, Lingam S, Taylor AG, Blum M, Mongan J. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks. J Digit Imaging. 2017;30(1):95–101.

  32. Litjens G, Sanchez CI, Timofeeva N, et al. Deep Learning as a Tool for Increased Accuracy and Efficiency of Histopathological Diagnosis. Scientific Reports. 2016;6(1):26286.

  33. Murphree DH, Ngufor C. Transfer Learning for Melanoma Detection: Participation in ISIC 2017 Skin Lesion Classification Challenge. arXiv preprint arXiv:1703.05235, 2017.

  34. Hart SN, Flotte W, Norgan AP, et al. Classification of Melanocytic Lesions in Selected and Whole-Slide Images via Convolutional Neural Networks. J Pathol Inform. 2019;10:5.

  35. Deufel CL, Furutani KM. Phenomenological Method for Heterogeneity Corrected Eye Plaque Dosimetry Using Line Source TG-43 Functions in Brachyvision TPS. The Mayo Technique? Brachytherapy. 2014;13:S48–S49.

  36. Chen Y, Argentinis JDE, Weber G. IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research. Clin Ther. 2016;38(4):688–701.

  37. Medina-Inojosa JR, Attia IZ, Kapa S, et al. Validation of a Deep Learning-Enabled Electrocardiogram Algorithm to Detect and Predict Cardiac Contractile Dysfunction in the Community. Circulation, 2019;140(Suppl_1):A13733–A13733.

  38. Attia IZ, Friedman PA, Asirvatham SJ, et al. Predicting Transient Ischemic Events Using ECG Data. Mayo Foundation for Medical Education and Research. 2018.

  39. Attia IZ, Kapa S, Lopez-Jimenez F, et al. Application of Artificial Intelligence to the Standard 12 Lead ECG to Identify People with Left Ventricular Dysfunction. J Amer Coll Cardiol. 2018;71(11 Supplement):A306.

  40. Zou J, Huss M, Abid A, et al. A Primer on Deep Learning in Genomics. Nature Genetics. 2019;51(1):12–18.

  41. Feyissa AM, Britton JW, van Gompel J, et al. High Density Scalp EEG in Frontal Lobe Epilepsy. Epilepsy Res. 2017;129:157–61.

  42. Iturria-Medina Y, Sotero RC, Toussaint, PJ, et al. Early Role of Vascular Dysregulation on Late-Onset Alzheimer’s Disease Based on Multifactorial Data-Driven Analysis. Nature Communications. 2016;7(1):1–14.

  43. Jones DT, Vemuri P, Murphy MC, et al. Non-Stationarity in the “Resting Brain’s” Modular Architecture. PloS One. 2012;7(6).

  44. Dolin R, Boxwala A, Shalaby J. A Pharmacogenomics Clinical Decision Support Service Based on FHIR and CDS Hooks. Methods of Information in Medicine. 2018;57(S 02):e115–e123.

  45. Inciardi JA, Surratt HL, Kurtz SP, Cicero TJ. Mechanisms of Prescription Drug Diversion Among Drug-Involved Club- and Street-Based Populations. Pain Med. 2007;8(2):171–83.

  46. Therneau T, Grambsch P. Modeling Survival Data: Extending the Cox Model. New York, NY: Springer; 2000.

  47. Marwan M, Kartit A, Ouahmane H. Security Enhancement in Healthcare Cloud Using Machine Learning. Procedia Computer Science. 2018;127:388–97.

  48. Park BH, et al. Big Data Meets HPC Log Analytics: Scalable Approach to Understanding Systems at Extreme Scale. In 2017 IEEE International Conference on Cluster Computing (CLUSTER). 2017.

  49. Piramuthu S. Machine Learning for Dynamic Multi-product Supply Chain Formation. Expert Systems with Applications. 2005;29(4):985–90.

  50. Jiang C, Sheng Z. Case-Based Reinforcement Learning for Dynamic Inventory Control in a Multi-Agent Supply-Chain System. Expert Systems with Applications. 2009;36(3, Part 2):6520–26.

  51. Shahrabi J, Mousavi S, Heydar M. Supply Chain Demand Forecasting: A Comparison of Machine Learning Techniques and Traditional Methods. Journal of Applied Sciences. 2009;9(3):521–27.

  52. Murphree D, et al. Ensemble Learning Approaches to Predicting Complications of Blood Transfusion. Conference Proceedings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference, 2015. 2015:7222–25.

  53. Chandra A, Rahman PA, Sneve A, McCoy RG, et al. Risk of 30-Day Hospital Readmission Among Patients Discharged to Skilled Nursing Facilities: Development and Validation of a Risk-Prediction Model. J Am Med Dir Assoc. 2019;20(4):444–450.e2.

  54. Harrison AM, Thongprayoon C, Kashyap R, et al. Developing the Surveillance Algorithm for Detection of Failure to Recognize and Treat Severe Sepsis. Mayo Clin Proc. 2015;90(2):166–75.

  55. Kate RJ, et al. Prediction and Detection Models for Acute Kidney Injury in Hospitalized Older Adults. BMC Medical Informatics and Decision Making. 2016;16(1):39.

  56. Thiels CA, Yu D, Abdelrahaman AM, et al. The Use of Patient Factors to Improve the Prediction of Operative Duration Using Laparoscopic Cholecystectomy. Surg Endosc. 2017;31(1):333–40.

  57. Bai M, Pasupathy KS, Sir MY. Pattern-based Strategic Surgical Capacity Allocation. J Biomed Inform. 2019;94:103170.

  58. Kazemian P, et al. Coordinating Clinic and Surgery Appointments to Meet Access Service Levels for Elective Surgery. J Biomed Inform. 2017;66:105–15.

  59. Martinez G, et al. A Data-Driven Approach for Better Assignment of Clinical and Surgical Capacity in an Elective Surgical Practice. In AMIA Annual Symposium Proceedings. 2016. American Medical Informatics Association.

  60. Martinez G, et al. A Coordinated Scheduling Policy to Improve Patient Access to surgical services. In 2016 Winter Simulation Conference (WSC). 2016. IEEE.

  61. Hosseini N, et al. Surgical Duration Estimation via Data Mining and Predictive Modeling: A Case Study. In AMIA Annual Symposium Proceedings. 2015. American Medical Informatics Association.

  62. Hosseini N, et al. Effect of Obesity and Clinical Factors on Pre-Incision Time: Study of Operating Room Workflow. In AMIA Annual Symposium Proceedings. 2014. American Medical Informatics Association.

  63. Lin RC, Sir MY, Pasupathy K. Comparison of Regression-based and Neural Networks-based Prediction Models for Operating Room Scheduling. In International Conference on Big Data and Analytics. 2014: Singapore.

  64. Lin RC, Sir MY, Pasupathy KS. Multi-objective Simulation Optimization Using Data Envelopment Analysis and Genetic Algorithm: Specific Application to Determining Optimal Resource Levels in Surgical Services. Omega. 2013;41(5):881–92.

  65. Davenport T, Kalakota R. The Potential for Artificial Intelligence in Healthcare. Future Healthcare Journal. 2019;6(2):94.

Daniel J. Quest, PhD

Daniel J. Quest, PhD, is a principal data scientist in Mayo Clinic’s Department of Data Analytics and chairs Mayo Clinic’s Machine Learning and Deep Learning Journal Club.


David Upjohn, MS

David Upjohn, MS, is the operations manager for the Department of Otolaryngology – Head & Neck Surgery at Mayo Clinic in Arizona and an instructor in healthcare systems engineering in the Mayo Clinic College of Medicine and Science.


Eric Pool, EdD, PMP, ITIL

Eric Pool, EdD, PMP, ITIL, is a lead analyst at Mayo Clinic and assistant professor of health care administration at the Mayo Clinic College of Medicine & Science. He is also an instructor at Harvard University and UC Berkeley.


Ronald Menaker, EdD, FACMPE

Mayo Clinic, 200 First Street, SW, Rochester, MN 55905; phone: 507-538-7340; e-mail: menaker.ronald@mayo.edu.


James S. Hernandez, MD, FCAP

James S. Hernandez, MD, FCAP, is an emeritus associate professor of laboratory medicine and pathology and the past medical director of the laboratories, Mayo Clinic in Arizona.


Kenneth Poole, MD, MBA, CPE, FACP

Kenneth Poole, MD, MBA, CPE, FACP, is chair of the Mayo Clinic Enterprise Health Information Coordinating Subcommittee and medical director of patient experience for Mayo Clinic, Scottsdale, Arizona. poole.kenneth@mayo.edu
