
Unlocking AI’s Impact Potential in Healthcare: Bridging the Implementation Gap

  • Impact VC
  • Aug 5
  • 5 min read

AI’s transformative potential in healthcare is clear, but realising that impact hinges on implementation: moving beyond efficacy and effectiveness to actual integration into care settings.


We were delighted to bring the ImpactVC community together at Google with Dr Patrik Bachtiger, NHS Doctor, NIHR Academic in Digital Health at Imperial’s Health Impact Lab, and Venture Partner at Meridian Health Ventures, who shared his insights into some of the key challenges and emerging opportunities in this rapidly evolving space.



Community members gathered to explore how AI can address real-world healthcare challenges. A huge thank you to our amazing partners at Google Cloud for generously hosting us.

Here are a few of the main takeaways:


AI’s Knowing-Doing Gap

The AI knowing-doing gap in healthcare describes the disconnect between what artificial intelligence technologies can prove in traditional research settings (“knowing”) and the research and evidence needed to translate otherwise “proven” AI technologies into real impact for patients and health systems (“doing”).


Despite promising results in research and pilot projects, only a fraction of AI tools achieve widespread clinical adoption and tangible impact. This gap exists because success hinges not just on technical efficacy, but on effective, real-world implementation—navigating clinical workflows, regulatory hurdles, data governance, health economics, and behavioral and social factors. Patrik shared the Health Impact Lab’s approach for bridging the knowing-doing gap by focusing on how AI is integrated into everyday care, ensuring solutions drive measurable benefits for patients and providers, rather than becoming yet another unfulfilled innovation.


Why implementation in health settings is AI’s biggest challenge

  • There’s a proliferation of efficacy studies (demonstrating performance in controlled settings) and effectiveness trials (showing value in broader, real-world conditions), but far fewer implementation trials that rigorously study real-world integration, and very few measure outcomes like adoption and workflow fit.

  • Contextual complexity: Each implementation environment (primary care vs. specialist, high- vs. low-resource, workflow integration) introduces unique sociotechnical and organisational factors.

  • Guidelines like CONSORT-AI and DECIDE-AI are beginning to shape expectations for robust evaluation, but most deployments still lack a systematic framework for continuous post-deployment oversight and adjustment.


Key Cases and Implementation Lessons


The struggle to scale

1. Moorfields-DeepMind Retinal AI (Nature 2018) 

The groundbreaking Moorfields-DeepMind collaboration demonstrated that an AI could match expert ophthalmologists at diagnosing eye conditions and recommending referrals from retinal scans. Despite this technical success (the AI made correct referral decisions over 94% of the time), real-world adoption in the NHS and beyond has remained very limited, even seven years on. This ‘failure to scale’ is emblematic of AI's knowing-doing gap: promising efficacy, but low effectiveness and impact due to the complexities of implementation. Lack of robust clinical guidelines tailored to AI deployment, administrative inertia, and absence of clear pathways for regulatory approval have all played roles in slowing adoption.


Implementation in Action 

2. AI-ECG for Heart Failure (Mayo Clinic + Eko)

A collaboration between Mayo Clinic, Eko Health, and Imperial’s Health Impact Lab has produced a regulator-approved, AI-enabled digital stethoscope and ECG device that can detect heart failure with reduced ejection fraction at the point of care. Clinical trials in the US showed it could double the identification rate of patients with impaired cardiac function, supporting earlier intervention and reducing costs.


The device has first-in-kind MHRA approval, with real-world implementation trials underway in London NHS clinics, focusing on practical integration, workflow fit, and economic impact. These efforts are part of the NIHR-funded TRICORDER programme, led by Imperial, which is deploying the device across 200 GP practices to support diagnosis of heart failure.

A study published in The Lancet Digital Health involving over 1,000 NHS patients found the tool achieved 91% sensitivity and 80% specificity, comparable to more invasive and expensive diagnostics. The next phase has been evaluating whether giving GPs access to the tool can improve early detection, reduce emergency admissions, and lower NHS costs, involving 200 practices and covering more than 3 million patients.
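To make the reported figures concrete, sensitivity and specificity can be turned into positive and negative predictive values once a disease prevalence is assumed. The sketch below uses the study’s 91% sensitivity and 80% specificity; the 2% prevalence is a hypothetical assumption for illustration only, not a number from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a diagnostic test at a given disease prevalence."""
    tp = sensitivity * prevalence            # true positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    ppv = tp / (tp + fp)  # probability of disease given a positive result
    npv = tn / (tn + fn)  # probability of no disease given a negative result
    return ppv, npv

# 91% sensitivity, 80% specificity, hypothetical 2% prevalence
ppv, npv = predictive_values(0.91, 0.80, 0.02)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV: 8.5%, NPV: 99.8%
```

The asymmetry is the point: at low prevalence a test like this is far better at ruling disease out than ruling it in, which is exactly the profile you want for a front-line screening tool in primary care.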


The stethoscope has not evolved over the past 200 years, but now AI technology could help speed up the diagnosis and treatment of heart disease.

Caitlin Bristol, Investment Director in Impact Ventures at Johnson & Johnson, shared another case study of implementation in action. In a real-world study across 15 clinics and nearly 40,000 patient visits, Penda Health and OpenAI tested an AI clinical copilot. The tool ran in the background of consultations, reducing diagnostic errors by 16% and treatment errors by 13%. Clinicians reported improved care quality, with no harm identified. Crucially, the project focused on real-world implementation—integrating the tool into clinical workflows and prioritising safety, usability, and post-deployment learning.



Complementing their blog post, they have also shared a full research paper on the study, AI Consult, and Penda’s deployment.



AI Bias: Core Issues

  • What causes bias? Data representativeness, sampling (majority groups dominating), feature selection, and even model deployment practices can generate bias. Distribution drift and measurement bias also contribute.

  • Where does it appear? At every stage: design, training, and operational use—including post-market environmental changes.

  • Who does it affect? Clinically vulnerable populations, minorities, or anyone underrepresented in the original data.

  • What’s the consequence? Disparate outcomes, erosion of trust, regulatory or clinical pushback, and sometimes direct patient harm if not rigorously monitored.



Implementation Opportunity & Regulatory Dynamics

Public health and prevention at the front end of clinical pathways offer the greatest potential for transformative impact — for example, digital companions supporting adherence to anti-obesity medication.


However, regulators such as the MHRA (UK) and FDA (US) are increasingly overwhelmed by a surge in AI-related submissions, often in the absence of robust frameworks for assessing ongoing safety in real-world settings. This capacity challenge underscores the urgent need for dynamic, iterative regulation and the systematic integration of implementation science.


Key Takeaways

  • Implementation in health settings is AI’s biggest challenge.

  • Studies must evolve from efficacy to effectiveness and, critically, to pragmatic, context-sensitive implementation research.

  • Bias in AI is multifaceted and must be rigorously tracked at all stages of the lifecycle with transparent frameworks.

  • “Winners” will have the rare combination of strong technological differentiation and a well-executed implementation (evidence) strategy.


Further reflection for VCs and founders

  • When engaging with founders and innovators, always ask: what is the specific clinical guideline or deployment pathway for this AI tool? If none exists, encourage them to co-design solutions with regulators and clinicians from the outset.

  • Encourage founders to document and publish real-world implementation studies—messy or not, including failures—as these are essential for evidence-based scaling and long-term sustainability.


Next steps

If you found this session or write-up helpful, we’d love to hear from you. What other health-related topics would you be interested in exploring further? Let us know your ideas for future sessions, dream speakers, or if you’d be keen to host a session yourself—especially on themes at the intersection of health, venture capital, and impact. Please drop Ellie a note (ebroad@bettersocietycapital.com) or share your ideas in this short form.


 
 
 