PERSPECTIVE ARTICLE
“I’m sorry, Dave. I’m afraid I can’t do that” is the iconic statement from HAL, the AI computer in the movie 2001: A Space Odyssey.
In healthcare, we have arrived at a stage where many who provide care are saying “I’m afraid I can’t do that” about implementing and adopting AI. They also worry that AI will alter their roles or reduce their influence in medical care. For example, there is a fear that physician interpretation of radiologic images will become unnecessary within the next decade, a fear compounded by the concern among medical professionals that mistakes or fabrications generated by AI will harm patients and society at large.
We need to understand that technology change is, at its heart, cultural change. Adopting, implementing, and using any novel technology that represents a disruption, and a potential threat or risk to one’s day-to-day life and career, will produce anxiety in most individuals and perhaps excitement in a few. This has always been the challenge of innovation diffusion. Everett Rogers’s original work in agriculture, which produced his diffusion of innovations theory, taught us that inventors seek out like-minded innovators and early adopters, the novelty-seeking risk-takers, to implement their disruptive ideas and technologies.
In August 2025, the MIT Media Lab published The GenAI Divide: State of AI in Business 2025.(1) The researchers studied over 300 publicly disclosed AI initiatives across 52 organizations. They found that over 90% of employees at the surveyed companies were personally using AI, yet 95% of AI pilots were failing.
Our philosophy and the MIT research demonstrate that it is incumbent on entrepreneurs, leaders, and implementers to approach AI adoption through the lens of people and organizational considerations. Emotions, risk-driven reasoning, and group behavior, more than the quality or benefit of an AI solution or technology itself, will drive the speed of adoption. Once again, it is the combination of people, processes, and technology within a workflow that must come together to drive success.
We have written about people and organizational issues in the past, and those constructs and ideas can and should be used to usher in this new age of technology, one that is at once riveting, exciting, and frightening.
Most of our past work involved health information–based systems. For years, electronic health records were touted as the magic bullet. The early failures and ongoing challenges of health information systems stem from insufficient attention to the people and organizational issues that affect implementation and adoption.
When we talk about how organizations incorporate AI into their daily workflow, there are four key issues to consider:
People and human factors. The goal of AI is to help and support people, not to compete with them or take over their roles. A radiologist’s education, however, encompasses far more than “reading” images. The future of AI in radiology is therefore to enhance the radiologist’s efficiency and, in some cases, augment their pattern recognition and diagnostic formulation. The radiologist is an educated physician whose medical knowledge extends beyond imaging and who can delve deeper into the issues with the other professionals who will be treating the patient.
Communication and interaction with patients. Someone with medical knowledge and the ability to communicate empathically with a patient is critical to that patient’s success. A patient who understands what they need to do, and who is motivated to follow the prescribed treatment, is more likely to do so. Recent research has highlighted the effectiveness of AI in providing information to patients. But although AI models may be trained on voice, context, facial recognition, and language, will they be trained on psychological mechanisms such as denial, resistance, or transference? That is the unique skill set of empathic physicians.
Organizational issues. Historically, organizations maintained “paper” backups for electronic medical records for when the system went down. If AI tools and processes become pervasive and integrated into day-to-day tasks, organizations will need to determine what happens when those systems go down, and they must have protective systems or processes in place in case issues arise with the AI tools.
Ethical issues. Finally, and perhaps most importantly, there is the issue of ethics. HAL’s refusal to open the pod bay doors when asked in 2001: A Space Odyssey dramatized an ethical failure. AI systems are “trained” on known information, and a training dataset can only be as good as the data, and the “values and morals,” embedded in it. If critical issues are missing from the training material, there will consequently be problems with the outcomes. Popular movies such as “The Terminator” and “The Matrix” give society a dystopian glimpse of what happens when machines make life-and-death decisions without human intervention; in “The Matrix,” the machines even rely on humans to power their AI. Safe and ethical guardrails must be built into AI systems.
We are optimists, not dystopians, and we believe that as long as we pay attention to the people, organizational, and ethical issues in all of our AI efforts, a new age of efficiency, prosperity, innovation, and discovery awaits us. Let’s be safe and responsible as we thoughtfully usher in a new future for healthcare and humanity at large.
Reference
1. Challapally A, Pease C, Raskar R, Chari P. The GenAI Divide: State of AI in Business 2025. MIT Media Lab; 2025. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

