Clinician-facing AI tools are shifting from passive data capture to active decision support. Tandem Health is at the frontier of this change, and its Director for the UK and Ireland, Dr Katie Baker, is working to ensure that AI augments, rather than erodes, clinical expertise.
A quiet revolution is happening in healthcare, and it is not being led by surgical robots or predictive diagnostics. It is unfolding in the consultation room, where AI is beginning to influence how clinicians listen, think, and make decisions. At the heart of this shift is the rise of AI scribes, tools that automate clinical note-taking in real time. For Tandem Health’s Dr Katie Baker, these technologies present both an opportunity and a warning.
“AI can provide tremendous support in pattern recognition, data retrieval, and process optimisation, but clinical judgment involves nuance, emotional intelligence, and contextual awareness that machines cannot yet replicate,” Dr Katie Baker, Director of UK & Ireland at Tandem Health, explains. “Today, the boundary lies at interpretation. AI might suggest a likely diagnosis or draft clinical notes. Still, the human clinician is the one who understands the patient’s wider context, weighs ethical considerations, and makes the final call.”
The stakes are higher than workflow efficiency. Clinical training itself risks being undermined by over-automation. The process of writing notes is not just administrative; it is cognitive. It teaches diagnostic reasoning and reflection. Remove that, and you erode the scaffolding of medical education. “If that becomes entirely automated without oversight, we risk losing an essential part of how clinicians learn and think,” Baker says. “Clinicians should always review and edit the AI’s output, especially during training. Over time, we may need to adapt educational models to ensure that while AI supports administrative efficiency, it does not replace core learning experiences.”
The distinction between augmentation and outsourcing is increasingly blurred in the rush to adopt intelligent assistants. For Baker, the line is defined by intent and control. The relationship remains healthy as long as the clinician retains oversight and the technology is used to accelerate rather than replace their thinking. “They save time, but the clinician retains full control over what’s recorded and ultimately signed off,” she adds. “Clear boundaries, transparent workflows, and human-in-the-loop design are crucial to keeping AI a collaborative partner rather than a replacement.”
Trust lives in the margins
The most advanced AI tools in healthcare do not diagnose or operate. They listen, transcribe, and support. Yet, their impact on the patient-clinician relationship is profound. These tools now shape how clinicians engage with patients and, by extension, how patients experience care. “When an AI scribe handles note-taking, the clinician can maintain eye contact, listen more deeply, and focus entirely on the patient in the moment. That’s a more human interaction, not less,” Baker says. “So, while AI shouldn’t be in the foreground, it can support and protect the human connection.”
Trust, however, is fragile. Experienced clinicians are often sceptical of tools that overpromise, particularly if they disrupt familiar workflows. Baker says the key to adoption lies not in dazzling accuracy but in consistent transparency. “Trust hinges on reliability, transparency, and clinician autonomy,” she explains. “If an AI tool is opaque, inconsistent, or interferes with a clinician’s judgment, trust quickly erodes. At Tandem Health, we’ve addressed this by involving clinicians from the ground up, like co-designing our tools, offering real-time control, and ensuring transparency around how decisions are made.”
Explainability in this context is not about understanding the algorithms but about understanding the tool’s behaviour. Clinicians want to know what the tool can do, how it performs, and when to step in. The analogy with medicine itself is helpful. “It’s similar to how we use medications; we don’t all need to know the molecular interactions, but we need to understand side effects and efficacy. For AI, that means clear communication about accuracy rates, how feedback is used to improve the system, and when human oversight is needed.”
The infrastructure problem no one talks about
Deploying AI scribes at scale may appear straightforward on the surface. In practice, it is anything but. The real obstacles lie not in machine learning models but in integration, latency, and system interoperability. The challenges are infrastructural, not algorithmic.
“Integration with Electronic Health Records is a significant hurdle; systems vary widely and are often not optimised for external tools,” Baker continues. “Then there are data security and compliance requirements, especially in regions like the UK with GDPR. Latency is another key issue; clinicians can’t afford to wait for a response. Even small delays can disrupt the flow.”
This emphasis on real-world constraints is central to Tandem’s approach. It is not enough to build a system that works in theory. It must work in consultation rooms, on busy wards, and within the constraints of legacy platforms. “Our technical strategy is built around real-time performance and flexibility,” Baker adds. “Low latency is non-negotiable, so we optimise our AI models and processing pipelines to deliver outputs within seconds. On interoperability, we focus on designing APIs that align with NHS and international standards to ensure smooth integration.”
These lessons extend far beyond healthcare. In any industry where human outcomes are at stake, whether in finance, aviation, or education, deployment success depends on fitting into human systems, not asking those systems to bend to the technology. “One key lesson is that AI deployment is not just about capability; it is about context,” Baker explains. “Involving frontline workers in development, ensuring transparency, and providing human override mechanisms are essential to building trust. Lastly, emphasising regulation and ethical frameworks in healthcare could be a model for responsible AI deployment across other industries.”
Changing the culture of adoption
While the infrastructure demands attention, the cultural dynamics of healthcare are equally decisive in shaping AI’s future. Resistance is not born out of technophobia but out of experience. The sector has seen a succession of digital interventions that disrupted more than they helped. “Healthcare has a long memory and a deep-rooted respect for evidence and tradition,” Baker notes. “Many clinicians have seen tech come and go, often overpromising and underdelivering. That scepticism is understandable.”
Yet something is shifting. The pandemic forced digital change, and with it came a new appreciation for tools that genuinely help. For younger clinicians, digital tools are not add-ons but expected features of the clinical environment. “The key is meeting the culture where it is now, with humility, and proving value through real-world results,” Baker says.
That value, Baker argues, should not be measured solely in time saved. The real benefit of AI scribes lies in enabling better care, not just faster documentation. By lifting the cognitive load, AI can help clinicians reclaim the mental bandwidth needed to reflect, observe, and connect. “Time-saving is the metric, but the real impact is deeper,” she explains. “When clinicians are not rushed, they can think more clearly, spot subtleties in patient behaviour, and build trust. Reflection is essential for good clinical practice; it’s how we improve, catch errors, and grow.”
However, AI should not become a shortcut to preserve broken systems. There is a growing risk that AI tools will simply accelerate dysfunction rather than help redesign care delivery. “If we simply apply AI to make an inefficient system faster, we’re missing the bigger opportunity,” Baker continues. “But used thoughtfully, AI can reveal where the system is failing and open conversations about how to redesign it. The goal is not just automation; it is transformation.”
Looking beyond the interface
The future of clinical AI is not just about what it captures but about how it collaborates. As these tools evolve, Baker sees the line between note-taking and clinical reasoning beginning to blur. “Imagine a tool that not only transcribes but understands the context, flags missing elements, and offers clinical prompts based on what’s being discussed,” she continues. “It won’t replace thinking but will guide and refine it.”
Such a system would shift from post hoc documentation toward real-time reasoning support, an assistant that listens and helps clinicians stay present without taking over. However, as capability increases, so must the ethical guardrails. “Yes, compliance matters, data security, GDPR, clinical safety,” Baker adds. “But responsible deployment also means understanding the emotional and practical realities of healthcare work. Does the tool reduce stress? Does it respect the clinician’s autonomy? Does it make patients feel heard?”
That requires design decisions rooted in ethics, empathy, and a long-term understanding of what it means to care. One principle, for Baker, should guide it all. “Enhance, never replace. Every AI tool should be built to make clinicians more effective, not less essential,” she concludes. “We should always be asking: how does this improve the care experience for both the clinician and the patient?”
The promise of AI in healthcare is not in superseding clinical intelligence but in protecting the space where intelligence, intuition, and empathy meet. The tools are improving. But the real innovation may be in what we choose to do with the time, attention, and trust they help us win back.