Building Trust through Governance: Realistic AI Deployment in Drug Safety
Marie Flanagan, Regulatory and AI Governance Lead, IQVIA Safety Technologies
Dec 02, 2025

The increasing adoption of Artificial Intelligence (AI) in drug discovery and development has the potential to transform drug safety functions. However, healthcare's highly regulated environment demands more than generic, one-size-fits-all governance approaches.

In this two-part blog, Marie Flanagan, Regulatory and AI Governance Lead, IQVIA Safety Technologies, explores how IQVIA's deployment of AI governance in pharmacovigilance (PV) systems offers valuable lessons for pharma companies working in this complex environment.

Part 1 outlines the current challenges, explains why a generic AI governance framework simply doesn't work for the diverse global healthcare industry, introduces IQVIA Vigilance Platform with AI Assistant, examines the need for, and benefits of, a multi-layered governance approach, and covers technical implementation considerations.


Part 1: The Challenge: More Than Generic AI Governance

Generic AI governance models are not suitable for the healthcare industry. Given the nature of the industry, a one-size-fits-all solution would expose providers to risk. Drug safety systems have highly specific needs and therefore require domain-specific solutions that address evolving regulations, customer concerns and specific patient safety risks.

As shown in Figure 1 below, the global regulatory ecosystem continues to evolve, and with that comes great complexity. Pharmaceutical companies must navigate international regulations, from the EU AI Act to FDA requirements and the rules of many other regulatory bodies worldwide, while balancing them against the existing drug safety standards of different jurisdictions.

Figure 1: A selection of the vast array of global regulation

Effective AI governance depends on the integration of business, technical and compliance factors from the very beginning. The multidisciplinary nature of drug safety reporting – structured forms, unstructured emails, call center transcripts and literature reviews – is exactly the kind of challenge generic AI solutions fail to meet. It is in these demanding operating environments that IQVIA's deep expertise comes to the fore. With more than a decade of experience in automation and AI, IQVIA has been driving the application of AI-based tools and systems in drug development and healthcare delivery.

IQVIA Vigilance Platform with AI Assistant is designed with governance and cross-functional expertise built in, from ideation through development and deployment to routine use, adhering to the OECD AI principles and evolving drug and device regulatory requirements for high-risk systems. It is employed in PV, where precision directly impacts patient safety. With visibility into the latest developments, IQVIA helps shape the AI policy landscape, drawing on industry-leading regulatory understanding in over 110 countries. Risk-based assessment and transparency controls carefully balance human judgment against AI autonomy, and feedback mechanisms allow the system to evolve dynamically in response. Let's look at the platform in more detail:

  • What does Vigilance Platform AI Assistant do? It is a generative AI model used to review structured and unstructured source documents, identify safety events and extract individual case safety report (ICSR) entities (data fields).
  • What does it not do? It does not involve any training or fine-tuning of a Large Language Model (LLM). IQVIA securely prompts responses from a commercial LLM. No customer data is used to train any current or future version of an LLM.
  • How does it help? The model outputs (extracted data fields) are used to initiate and direct the case processing workflow in a global safety database, replacing manual review and manual data entry steps.
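As a rough illustration of this prompt-based, no-fine-tuning approach, the sketch below shows how structured fields might be extracted by prompting a hosted LLM and parsing its JSON reply. All names here (`extract_case_fields`, the field schema, the stubbed `llm_complete` callable) are hypothetical and not IQVIA's actual implementation:

```python
import json

# Assumed field schema for an individual case safety report extraction.
CASE_FIELDS = ["patient_age", "suspect_drug", "adverse_event", "event_date"]

def build_prompt(document_text: str) -> str:
    """Compose an extraction prompt; document text is sent for inference only,
    never used to train or fine-tune the model."""
    return (
        "Extract the following fields from the safety report below.\n"
        f"Fields: {', '.join(CASE_FIELDS)}\n"
        "Respond with a JSON object using exactly those keys.\n\n"
        f"Report:\n{document_text}"
    )

def extract_case_fields(document_text: str, llm_complete) -> dict:
    """Prompt a commercial LLM (injected as a callable) and parse its JSON reply."""
    raw = llm_complete(build_prompt(document_text))
    fields = json.loads(raw)
    # Keep only the expected schema; missing fields default to None.
    return {k: fields.get(k) for k in CASE_FIELDS}

# Stand-in for a real LLM client, so the sketch runs offline.
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "patient_age": "54",
        "suspect_drug": "DrugX",
        "adverse_event": "nausea",
        "event_date": "2025-11-03",
    })

result = extract_case_fields("Patient, 54, reported nausea after DrugX.", fake_llm)
```

The extracted dictionary can then seed the downstream case processing workflow in place of manual data entry.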

A Multi-Layered Governance Approach

Applying the OECD AI principles as the foundation of the governance model and overlaying the necessary healthcare-specific layers demonstrates how current systems can be engineered for regulated markets without sacrificing effectiveness. As referenced above, this framework has at its center risk-based assessment and transparency measures that balance human intervention against AI autonomy. Rather than treating oversight as optional, the system embeds it as a core capability driving every dimension of operation. IQVIA follows the guiding AI principles of:

  • Risk-based approach
  • Human oversight
  • Validity and Robustness
  • Transparency
  • Data Privacy
  • Fairness and Equity
  • Governance and Accountability

In addition, IQVIA operationalizes trust through:

  • Respect
  • Auditability
  • Staying current with regulations

The Importance of Domain-based Expertise

Domain-based frameworks address drug and device regulation requirements through deep integration with existing pharmacovigilance processes. This requires understanding case processing workflow dynamics and producing AI outputs of sufficient quality for regulatory submissions. Figure 2 below outlines the process for constructing domain-specific AI governance models.

Figure 2: Building Domain-Specific AI Governance

The platform's security framework prevents customer information from entering commercial models, addressing pharmaceutical companies' top AI concern: exposing sensitive patient data to risk, or handing competitors an advantage through access to confidential information. Real-time feedback monitoring enables dynamic realignment based on real-world performance across document types, treatment categories and changing regulatory requirements.


Technical Implementation: Security by Design

The technical architecture of Vigilance Platform AI Assistant defines how governance requirements are converted into action, through concrete implementation decisions that optimize security, safety, transparency and regulatory compliance while maintaining performance. In IQVIA's case, GPT-4 runs in a secure enclave with explicit guarantees that no customer data trains the model. This addresses the privacy and intellectual property concerns paramount to pharmaceutical companies, isolating sensitive information from broader AI training pipelines.

Prompt engineering rather than model fine-tuning minimizes privacy concerns without compromising effectiveness. The approach avoids typical fine-tuning shortcomings while leveraging sophisticated techniques to customize AI behavior to specific use cases. Field-level confidence scoring offers fine-grained direction for human review. Instead of single document scores, the system produces confidence ratings at the level of individual data fields, allowing for precise oversight alongside accelerated high-confidence extraction.
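A minimal sketch of how field-level confidence scores might route work between automation and human review (the threshold value and function names are illustrative assumptions, not the platform's actual logic):

```python
# Assumed review threshold; in practice this would be configurable per organization.
REVIEW_THRESHOLD = 0.90

def triage_fields(extracted: dict) -> dict:
    """Split field-level extractions into auto-accepted vs. needs-review,
    based on each individual field's confidence score."""
    auto, review = {}, {}
    for field_name, (value, confidence) in extracted.items():
        if confidence >= REVIEW_THRESHOLD:
            auto[field_name] = value      # high confidence: accelerate
        else:
            review[field_name] = value    # low confidence: human oversight
    return {"auto_accepted": auto, "needs_review": review}

# Each field carries its own (value, confidence) pair, not one document score.
extraction = {
    "suspect_drug": ("DrugX", 0.98),
    "adverse_event": ("nausea", 0.95),
    "event_date": ("2025-11-03", 0.62),  # low confidence -> human review
}
routed = triage_fields(extraction)
```

Scoring per field rather than per document means a single uncertain date does not force re-review of an otherwise high-confidence case.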

Adjustable thresholds governing automated workflow decisions allow organizations to customize automation-human review balance as a function of risk tolerance and regulatory need. Comprehensive audit trails offer traceability and visibility throughout processes. Each AI decision, human revision, and workflow action is logged for regulation audits, sustained improvement, transparency, and continuous validation. Figure 3 below shows how AI safety governance is embedded from development to deployment.
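An audit trail of the kind described above can be sketched as an append-only log in which every AI decision and human revision is timestamped and attributed. The class and field names below are hypothetical, shown only to make the traceability idea concrete:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditTrail:
    """Append-only record of AI decisions and human revisions."""
    entries: list = field(default_factory=list)

    def log(self, actor: str, action: str, detail: dict) -> None:
        """Append a record with a UTC timestamp; entries are never mutated."""
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,       # "ai" or a reviewer identifier
            "action": action,     # e.g. "field_extracted", "field_revised"
            "detail": detail,
        })

trail = AuditTrail()
trail.log("ai", "field_extracted",
          {"field": "adverse_event", "value": "nausea", "confidence": 0.95})
trail.log("reviewer_17", "field_revised",
          {"field": "event_date", "old": None, "new": "2025-11-03"})
```

Because each entry records who acted, what changed and when, the same log can serve regulatory audits, transparency reporting and continuous validation.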

Figure 3: The Safety AI Governance Pathway


Part 2 is available here.
