Who is held accountable when AI systems make mistakes in medicine? (© BiancoBlue | Dreamstime.com)
AI is diagnosing patients, but doctors are still held responsible
In a nutshell
Doctors are being unfairly blamed for AI errors. Even when an AI system provides incorrect guidance, physicians are often held more responsible than the AI developers, institutions, or tools themselves—highlighting a dangerous accountability gap.
Physicians face an impossible balancing act. They must decide when to trust AI and when to override it, navigating between overreliance and skepticism—all while lacking clear regulations or legal protections.
Supporting doctors is crucial. The solution isn’t perfect AI, but better training, clearer standards, and shared responsibility so that physicians aren’t expected to be “superhuman” in managing machine-generated decisions.
AUSTIN, Texas — Doctors are increasingly being asked to use AI systems to help diagnose patients, but when mistakes happen, they take the blame. New research shows physicians are caught in an impossible trap: use AI to avoid mistakes, but shoulder all responsibility when that same AI fails. This “superhuman dilemma” is the healthcare crisis nobody’s talking about.
The Doctor’s Burden: Caught Between AI and Accountability
New research published in JAMA Health Forum explains how the rapid deployment of artificial intelligence in healthcare is creating an impossible situation for doctors. While AI promises to reduce medical errors and physician burnout, it may be worsening both problems by placing an unrealistic burden on physicians.
Researchers from the University of Texas at Austin found that healthcare organizations are adopting AI technologies much faster than regulations and legal standards can keep pace. This regulatory gap forces physicians to shoulder an extraordinary burden: they must rely on AI to minimize errors while simultaneously bearing full responsibility for determining when these systems might be wrong.
Studies reveal that the average person assigns greater moral responsibility to physicians when they’re advised by AI than when guided by human colleagues. Even when there’s clear evidence that the AI system produced wrong information, people still blame the human doctor.
Physicians are often viewed as superhuman. They are expected to have exceptional mental, physical, and moral abilities. These expectations go far beyond what is reasonable for any human being.
When Two Decision-Making Systems Collide
AI systems are meant to support doctors in the clinic, but in practice they can add stress instead.
(Photo by National Cancer Institute on Unsplash)
Physicians face a complex challenge when working with AI systems. They must navigate between “false positives” (putting too much trust in wrong AI guidance) and “false negatives” (not trusting correct AI recommendations). This balancing act occurs amid competing pressures.
Healthcare organizations often promote evidence-based decision-making, encouraging physicians to view AI systems as objective data interpreters. This can lead to overreliance on flawed tools. Meanwhile, physicians also feel pressure to trust their own experience and judgment, even when AI systems may perform better in certain tasks.
Adding to the complexity is the “black box” problem. Many AI systems provide recommendations without explaining their reasoning. Even when systems are made more transparent, physicians and AI approach decisions differently. AI identifies statistical patterns from large datasets, while physicians rely on reasoning, experience, and intuition, often focusing on patient-specific contexts.
The Hidden Costs of Superhuman Expectations
The consequences of these expectations affect both patient care and physician wellbeing. Research from other high-pressure fields shows that employees burdened with unrealistic expectations often hesitate to act, fearing criticism. Similarly, physicians might become overly cautious, only trusting AI when its recommendations align with established care standards.
Deciding to trust in AI adds another layer of pressure for healthcare professionals. (Maridav/Shutterstock)
This defensive approach creates problems of its own. As AI systems improve, excessive caution becomes harder to justify, especially when rejecting sound AI recommendations leads to worse patient outcomes. Physicians may second-guess themselves more frequently, potentially increasing medical errors.
Beyond patient care, these expectations take a psychological toll. Research shows that even highly motivated professionals struggle to maintain engagement under sustained unrealistic pressures. This can undermine both quality of care and physicians’ sense of purpose.
Moving Forward: Shared Responsibility and Better Training
The solution isn’t just about making AI systems more trustworthy. Healthcare organizations need to equip physicians with skills and strategies to calibrate their trust effectively.
Practical approaches include:
Implementing standardized practices like checklists for evaluating AI recommendations
Systematically tracking outcomes from AI-assisted decisions to identify patterns of reliability
Integrating AI simulation training into medical education and hospital programs
Creating a culture of shared responsibility rather than individual blame
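The outcome-tracking approach above can be made concrete. The sketch below is a minimal illustration in Python, with entirely hypothetical field names and fabricated data; the paper prescribes no implementation. It shows how an organization might log AI-assisted decisions and compute, per case category, how often the AI recommendation matched the confirmed final diagnosis, giving physicians an evidence base for calibrating their trust rather than guessing.

```python
# Hypothetical sketch of outcome tracking for AI-assisted decisions.
# All field names ("category", "ai_recommendation", "confirmed_diagnosis")
# and the toy data are illustrative assumptions, not from the paper.
from collections import defaultdict

def reliability_by_category(records):
    """Group decision records by case category and report the fraction
    of cases where the AI recommendation matched the confirmed diagnosis."""
    tallies = defaultdict(lambda: {"agree": 0, "total": 0})
    for rec in records:
        t = tallies[rec["category"]]
        t["total"] += 1
        if rec["ai_recommendation"] == rec["confirmed_diagnosis"]:
            t["agree"] += 1
    return {cat: t["agree"] / t["total"] for cat, t in tallies.items()}

# Toy log of AI-assisted cases (fabricated for illustration).
log = [
    {"category": "radiology", "ai_recommendation": "benign",
     "confirmed_diagnosis": "benign"},
    {"category": "radiology", "ai_recommendation": "malignant",
     "confirmed_diagnosis": "benign"},
    {"category": "dermatology", "ai_recommendation": "melanoma",
     "confirmed_diagnosis": "melanoma"},
]

print(reliability_by_category(log))
# → {'radiology': 0.5, 'dermatology': 1.0}
```

In practice, a record like this would also capture whether the physician accepted or overrode the recommendation, so the same log could surface both failure modes the authors describe: trusting wrong AI guidance and rejecting correct guidance.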
The doctors of tomorrow will inevitably work alongside AI, but they shouldn’t carry the burden alone. By creating environments where responsibility is shared, training is prioritized, and perfection isn’t expected, we can ensure that artificial intelligence enhances rather than undermines the human intelligence that has always been at the heart of good medicine.
Paper Summary
Methodology
This paper is a viewpoint that synthesizes findings from multiple disciplines, including healthcare AI implementation, decision psychology, human-AI interaction, and research on professional expectations and burnout. The authors examined studies on physician-AI interactions in real-world settings and research on blame attribution in AI-assisted medical decisions.
Results
Key findings include: (1) Physicians bear disproportionate responsibility for outcomes regardless of AI system performance; (2) There exists a fundamental mismatch between AI’s statistical pattern recognition and physicians’ narrative reasoning approach; (3) Unrealistic expectations on physicians lead to both defensive medicine and psychological burnout.
Limitations
As a viewpoint rather than a systematic review, this paper reflects the authors’ perspectives rather than a comprehensive evaluation of all literature. The rapidly evolving nature of healthcare AI means some challenges identified may be addressed by future technological or regulatory developments. The focus is primarily on diagnostic AI rather than administrative applications.
Discussion and Takeaways
The viewpoint connects findings on “superhumanization” with physician wellbeing and performance. It argues for shifting from making AI trustworthy to equipping physicians with calibration skills, implementing standardized evaluation practices, tracking outcomes systematically, and incorporating AI training into medical education.
Funding and Disclosures
The authors reported no conflicts of interest related to this viewpoint. No specific funding sources were mentioned for the preparation of this paper.
Publication Information
The paper, “Calibrating AI Reliance—A Physician’s Superhuman Dilemma,” was authored by Shefali V. Patil, Ph.D.; Christopher G. Myers, Ph.D.; Yemeng Lu-Myers, MD, MPH from the University of Texas at Austin. It was published in the JAMA Health Forum on March 21, 2025 (Volume: 6, Issue: 3).