As a doctor who has worked in the field of AI, particularly language-based AI, I have been as excited as anyone about the revolution brought on by generative AI. Yet headlines such as “ChatGPT Beats Doctors at Medical Exam” or “AI Better at Diagnosing Than Clinicians” can be misleading: while they highlight genuine advances, they oversimplify the nuanced role AI plays in healthcare decision-making.
I recall a letter written to The Lancet in 2019 by Professor Antonio Di Ieva, in which he said, “Machines will not replace physicians, but physicians using AI will soon replace those not using it.”1 This statement frames AI as an assistant or a tool – think of our doctors as having a stethoscope in one hand and AI in the other. Or perhaps, as we enter the dawn of agentic AI, think of clinicians as members of teams comprising both human colleagues and AI agents.
We are living in an interesting period in which the pace of technological innovation seems to be continually accelerating, while the adoption of new tools in healthcare must still align with regulatory, ethical, and patient safety considerations. Many of the promises seen in pilots are not playing out at scale in clinical practice. From what I see today, success lies primarily in the administrative space – reducing manual, laborious tasks – and less so in the clinical domain.
Where we are seeing significant adoption is in the use of AI to reduce the administrative burden on clinicians. The best-documented examples involve ambient AI2 reducing documentation load and “pajama time” for clinicians3. In this scenario, listening devices capture physician-patient interactions and automatically draft clinical documentation from a synthesis of the conversation. Here, AI is not replacing the clinician but making their life easier by taking laborious paperwork off their hands.
However, we have yet to see widespread adoption in areas that typically require more critical thinking by clinicians – the higher-risk decision-making. While AI can assist with critical decisions by synthesizing complex data, its role should be to augment rather than replace clinical judgment. What is critical here is that the support is provided in a trustworthy and reliable way.

One of my biggest fears as a clinician is misinformation. The very basis of effective prevention and treatment is accurate, evidence-based information – whether from the scientific domain, such as clinical guidelines, or from the patient, such as an accurately taken history and subsequent investigations. If I ask an AI agent for a summary of my patient’s drug history, I want to be sure the information is accurate, and ideally to know how confident my AI colleague is in providing it. This remains a major limitation of current foundation models, which can present hallucinated information with the same authority as accurate information.
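To make that expectation concrete, here is a minimal, hypothetical sketch in Python of one kind of safeguard: accepting an AI-proposed medication summary only when every item can be traced back to a documented source, and flagging anything unverifiable for human review. The record structure, field names, and confidence labels are my own illustrative assumptions, not a description of any IQVIA system.

```python
# Hypothetical sketch: ground an AI-generated medication summary in source
# records and flag anything that cannot be traced back to the chart.
from dataclasses import dataclass

@dataclass
class MedicationRecord:
    drug: str
    source_note_id: str  # where in the chart this medication was documented

def summarize_drug_history(records: list[MedicationRecord],
                           model_output: list[str]) -> dict:
    """Accept the model's proposed medication list only where each item is
    traceable to a source record; surface the rest for human review."""
    documented = {r.drug.lower(): r.source_note_id for r in records}
    verified, unverified = [], []
    for drug in model_output:
        note_id = documented.get(drug.lower())
        if note_id:
            verified.append({"drug": drug, "source": note_id})
        else:
            unverified.append(drug)  # possible hallucination
    return {
        "summary": verified,
        "needs_review": unverified,
        "confidence": "high" if not unverified else "low",
    }

if __name__ == "__main__":
    chart = [MedicationRecord("Metformin", "note-0412"),
             MedicationRecord("Apixaban", "note-0397")]
    proposed = ["Metformin", "Apixaban", "Warfarin"]  # Warfarin is undocumented
    print(summarize_drug_history(chart, proposed))
```

The point is not the specific code but the design choice: the clinician sees where each item came from and is told explicitly when the AI’s confidence should be low.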
The recent advancements in medical reasoning models have brought the capability of “critical thinking” to the fore at far lower cost than previously thought possible. This is an area in which IQVIA has recently demonstrated best-in-class performance (read our related blog here). As these models begin to show more promise, and as the evidence builds that they can help inform clinical decisions, a repeatable, transparent, and trustworthy approach to their deployment becomes an increasingly pressing need.
Our Healthcare-grade AI® is designed to meet the specific needs of the healthcare and life sciences industries by combining expertise in healthcare, science, and AI; adaptable AI technology with the capability to fine-tune and validate models; and unparalleled quality health data.
In applying Healthcare-grade AI®, we are helping healthcare organizations realize the immense value and potential of AI to support everyday patient care. Below are two examples of putting Healthcare-grade AI® into practice:
Challenge: Social determinants of health (SDOH) such as employment status, financial situation, and stress are crucial for predicting health outcomes, yet they are often found only in unstructured physician notes. For instance, NorthShore University HealthSystem (now Endeavor Health) discovered that a mere 0.1% of patient records captured these details in coded fields, compared with 30% in clinician notes.
Solution: IQVIA supported NorthShore by applying Healthcare-grade AI® natural language processing to surface SDOH details from unstructured clinician notes (a simplified illustration of this kind of note mining follows below).
Results: NorthShore can now bridge care gaps by identifying and screening 56% more at-risk patients, enabling social workers to spend 4 times as much time with patients instead of reading volumes of medical records (read more here7).
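The published case study does not spell out the model internals, so the short Python sketch below is only a simplified illustration of the general idea – surfacing SDOH signals from free-text notes and keeping the matching snippet so a reviewer can confirm each flag in context. The categories, keyword patterns, and output format are my own assumptions; production clinical NLP relies on far more sophisticated, validated models.

```python
# Simplified, hypothetical sketch of mining clinician notes for social
# determinants of health (SDOH). Categories and keywords are illustrative only.
import re

SDOH_PATTERNS = {
    "employment": re.compile(r"\b(unemployed|laid off|lost (his|her|their) job)\b", re.I),
    "financial_strain": re.compile(r"\b(cannot afford|financial difficulty|struggling to pay)\b", re.I),
    "stress": re.compile(r"\b(high stress|overwhelmed|anxious about)\b", re.I),
}

def extract_sdoh(note_text: str) -> dict[str, list[str]]:
    """Return SDOH categories mentioned in a free-text note, together with
    the matching snippets, so each flag can be verified in context."""
    findings: dict[str, list[str]] = {}
    for category, pattern in SDOH_PATTERNS.items():
        matches = [m.group(0) for m in pattern.finditer(note_text)]
        if matches:
            findings[category] = matches
    return findings

if __name__ == "__main__":
    note = ("Patient reports she was laid off last month and is "
            "struggling to pay for her medications; feels overwhelmed.")
    print(extract_sdoh(note))
    # {'employment': ['laid off'], 'financial_strain': ['struggling to pay'],
    #  'stress': ['overwhelmed']}
```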
Challenge: Patients with atrial fibrillation (AFib) are five times more likely to have a stroke than those without the condition. IQVIA partnered with the UK National Health Service to reduce AFib-related strokes by identifying at-risk patients.
Solution: The risk of stroke for AFib patients was predicted using three key principles.
Results: Annual strokes were reduced by approximately 22% during the implementation phase compared to the prior period. This also led to an estimated reduction in healthcare costs, amounting to annual savings of approximately $2 million (read more here8).
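For readers who want context on how stroke risk in AFib is conventionally quantified, here is a short Python sketch of the widely used CHA2DS2-VASc score. This is offered purely as background: the case study does not state which risk model was deployed, and the patient attributes below are assumed field names, not IQVIA's implementation.

```python
# Illustrative only: the standard CHA2DS2-VASc score for estimating stroke
# risk in atrial fibrillation (higher score = greater risk).
from dataclasses import dataclass

@dataclass
class AFibPatient:
    age: int
    female: bool
    heart_failure: bool
    hypertension: bool
    diabetes: bool
    prior_stroke_or_tia: bool
    vascular_disease: bool

def cha2ds2_vasc(p: AFibPatient) -> int:
    """Sum the standard CHA2DS2-VASc components."""
    score = 0
    score += 1 if p.heart_failure else 0        # C: congestive heart failure
    score += 1 if p.hypertension else 0         # H: hypertension
    score += 2 if p.age >= 75 else (1 if p.age >= 65 else 0)  # A2 / A
    score += 1 if p.diabetes else 0             # D: diabetes
    score += 2 if p.prior_stroke_or_tia else 0  # S2: prior stroke/TIA
    score += 1 if p.vascular_disease else 0     # V: vascular disease
    score += 1 if p.female else 0               # Sc: sex category (female)
    return score

if __name__ == "__main__":
    patient = AFibPatient(age=78, female=True, heart_failure=False,
                          hypertension=True, diabetes=True,
                          prior_stroke_or_tia=False, vascular_disease=False)
    print(cha2ds2_vasc(patient))  # 5 – a level at which anticoagulation is typically considered
```

Whatever model sits underneath, the value described in the case study comes from identifying at-risk patients proactively rather than waiting for them to present with a stroke.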
It is not a bold claim to suggest that AI will be part of all clinicians’ lives in the next 5-10 years. Rather than debating “if” versus “when”, I think it is more important to ask “how” AI will be deployed in healthcare. The three areas mentioned above serve as good guiding principles for ensuring safe, compliant, and impactful deployments. With all of this, I am hopeful that in a couple of years I will look back on this blog and write a follow-up along the lines of “10 Ways AI/Clinician Partnerships Are Improving Patient Lives.”
1. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(19)32626-1/fulltext
2. https://www.forbes.com/sites/saibala/2024/08/26/ambient-ai-is-having-its-moment-in-healthcare/
3. https://pmc.ncbi.nlm.nih.gov/articles/PMC6712097/
4. https://www.iqvia.com/blogs/2024/10/a-blueprint-for-defensible-ai
5. https://www.iqvia.com/blogs/2024/08/making-genai-reliable
6. https://arxiv.org/pdf/2502.18992
7. https://www.iqvia.com/library/case-studies/nlp-closing-care-gaps-using-social-determinants-of-health
8. https://www.iqvia.com/locations/united-states/library/case-studies/reducing-risk-of-stroke-for-afib-patients