Jun 26, 2025
When a four-year-old boy in the U.S. began to experience recurring toothaches and headaches — followed by issues with his growth, balance and gait — physicians were baffled.
He saw paediatricians. He saw neurologists and orthodontists. In total, he saw 17 different healthcare professionals over the course of three years, misdiagnosed again and again.
Then his mother described his symptoms to ChatGPT. The tool suggested something that no physician had: Tethered Cord Syndrome, a condition in which tissue attachments restrict the movement of the spinal cord within the spinal column, causing it to stretch abnormally.
The AI diagnosis proved correct. It showed how the traditional doctor-patient relationship is changing; increasingly, there is a third party in the room. And while this is an N=1, it is one of many similar stories in recent years, which, at minimum, should give us pause to reflect on the implications.
Physicians, though, are cautiously optimistic about AI in healthcare; an American Medical Association study conducted first in 2023 and replicated in November 2024 found that a growing majority recognise the benefits of AI in delivering care (68%, up from 63% in 2023).
It also found that usage of AI tools in U.S. healthcare has almost doubled — from 38% to 66%. This is an unusually high pace of adoption for healthcare technology. Key benefits most commonly cited by physicians (n=1,183) include reducing stress & burnout, improving work efficiency and preventing cognitive overload.
Patients’ attitudes are more mixed; a small 2023 U.S. study found that a slim majority of patients were resistant to AI-powered medical diagnoses, adding to existing literature suggesting that patients are less trusting of advice generated by a computer.
This effect appears to be mitigated, however, when a healthcare professional explains that AI tools have been shown to be more accurate for particular diagnoses.
AI as an active participant in healthcare
For centuries, the physician’s authority has been built upon two pillars: their clinical knowledge (the epistemic role) and their ability to guide the patient through care with empathy and wisdom (the humanistic role).
AI, and particularly Large Language Models, are now assuming many of the epistemic functions with staggering proficiency. They can synthesize vast amounts of literature, identify patterns in complex data, and generate differential diagnoses at a scale no human can match.
As AI automates knowledge-based tasks, the physician's role becomes more humanistic. We’re being pushed toward what ethicists Emanuel and Emanuel describe as the interpretive and deliberative models.
The interpretive model focuses on the physician acting as a counselor, helping the patient clarify and articulate their values to make informed decisions about treatment. The deliberative model, on the other hand, involves a more active discussion between physician and patient, with the physician also weighing in on which values are most worthy and should be pursued.
This forces a profound redefinition of the clinician's value. They are becoming the interpreters and mediators of AI-generated insights, the ethicists who contextualise data-driven recommendations, and the counsellors who help patients navigate choices that align with their personal values. It’s a fundamental shift from traditional models of care.
AI's Impact on the Core of Care
This role shift is not abstract; it is actively reshaping the most critical components of the clinical encounter.
Shared Decision-Making (SDM), a cornerstone of modern patient-centred care, is being transformed. AI systems can analyse an individual's data to generate highly personalised treatment options, complete with statistical estimates of risks and benefits.
This allows clinicians and patients to engage in a more transparent, evidence-based dialogue. The AI can handle the complex task of simplifying medical jargon into plain language, ensuring the patient truly understands their diagnosis and options.
Simultaneously, tools like ambient AI scribes — which listen to conversations and automatically generate clinical notes — are restoring the human connection in the examination room. By freeing the physician from the tyranny of the keyboard, these tools enable more direct eye contact, more focused conversation, and a stronger therapeutic alliance.
Patient surveys confirm the effect: when AI scribes are used, doctors spend less time looking at their computers and more time engaging directly with the person in front of them.
The Consequences of Unmanaged Integration: Trust, Bias, and Accountability
This new tripartite relationship between doctor, patient, and algorithm is not without significant risk. Patients harbour legitimate fears of misdiagnosis, data breaches, and a loss of human contact. Navigating this requires a new level of governance. And for physicians and clinical decision-makers, it raises new questions: what does it mean to be a doctor? How will new power dynamics shape the medical profession?
Trust and Transparency become paramount. Clear, honest communication about when, how, and why AI is being used is non-negotiable. This is complicated by the current limitations of the technology.
As recent studies have shown, while LLMs show great promise, they can still provide inaccurate information in complex cases, from colon cancer management to anaesthetic planning. Blind trust is not an option. Ethical and safety requirements must be permanent; physicians must always lead, rather than merely serving as the ‘human in the loop’.
Algorithmic bias presents another pressing threat. If AI models are trained on datasets reflecting historical inequities, they will learn and perpetuate those same biases, leading to disparities in care for marginalised populations. This is not a technical flaw; it is a moral failure that erodes public trust.
Finally, accountability for AI-influenced decisions remains a complex ethical and legal challenge. When an error occurs, determining responsibility between the developer, the institution, and the clinician is dangerously ambiguous. Establishing clear accountability frameworks is a vital prerequisite for safe integration.
A Mandate for Action: Optimising AI Integration
Healthcare stakeholders must now champion strategies that embed human values into technology.
For healthcare organisations, establishing robust AI governance frameworks is fundamental. This must be a comprehensive "design-to-decommission" strategy, encompassing policies for procurement, validation, ongoing monitoring, and user training. Critically, these frameworks must be adaptive, capable of evolving with a technology that is advancing at an exponential rate.
For AI developers, human-centred design principles must be the priority. Usability, interpretability, and ethical safeguards cannot be afterthoughts; they must be core to the design process. Frameworks like FUTURE-AI, which demand fairness, universality, traceability, usability, robustness and explainability, provide a strong foundation for building systems that clinicians and patients can trust.
Ultimately, the central question for any AI implementation must be the one posed by the research: is this in the patient's best interest? The goal of automation is not technology for its own sake, but the pursuit of better, safer, and more compassionate care.
This transformation is not optional; it is already underway. If we fail to manage it with foresight, we risk not only untenable delays to care and eroded public trust, but also a crisis of identity for the medical profession itself.