Blog
Navigating the AI regulatory maze: Expert perspectives from healthcare executives - Part 2
Mike King, Senior Director of Product & Strategy, IQVIA
Alex Denoon, Partner, Bristows LLP
Chris Hart, Partner, Co-Chair, Privacy & Data Security Practice, Foley Hoag LLP
Sep 05, 2025

The integration of AI in healthcare is not a matter of "if" but "when and how."

From machine learning algorithms that enhance diagnostic precision to generative AI solutions that automate the production of draft regulatory content, the healthcare industry stands at a decision point. The promise of AI is undeniable: faster device and drug development, more tailored medicine and improved patient outcomes. But with this promise come novel regulatory nuances that must be carefully navigated. As companies in the life sciences sector race to implement effective AI solutions, they're confronted with a complex regulatory maze, technical challenges and strategic decisions that will shape the future of patient care.

This was the core of a recent LinkedIn Live panel discussion chaired by journalist Jon Bernstein, where Mike King, senior director, Product & Strategy at IQVIA, was joined by fellow industry experts Alex Denoon, life sciences regulatory partner at Bristows LLP, and Chris Hart, litigation partner and co-chair of the Privacy & Data Security Practice at Foley Hoag LLP, to offer their insights on navigating the global AI regulatory landscape from a quality assurance and regulatory affairs perspective.

Click to read Part 1
Click to read Part 3

PART 2

The priority hierarchy: what really matters
A revealing poll conducted by IQVIA in the weeks leading up to the LinkedIn Live event identified the top concerns of healthcare companies when implementing AI in healthcare QARA:

    1. Regulation (44%). The complexity of compliance requirements tops the list.
    2. Readiness of data (32%). Companies are beset by data quality, access and bias challenges.
    3. Organizational readiness (20%). Internal data literacy and AI competence is limited.
    4. Cost (4%). Remarkably, cost ranks lowest among these inhibiting factors.

This prioritization is a shrewd acknowledgment that regulatory compliance is the only potential "go to jail" moment, as Alex Denoon said, and thus the highest priority for risk-averse healthcare organizations.

Comprehending the data readiness challenge
The 32% data readiness score reflects the complex nature of managing healthcare data. Companies often discover that their data infrastructure, while adequate for routine operations, must be significantly enhanced for AI applications. Common challenges include:

    • Data fragmentation. Data is often spread over multiple systems in different formats and standards.
    • Quality issues. Missing values, inconsistent coding and validation problems become critical at the time of AI model training.
    • Legacy system integration. AI tools may not integrate smoothly with existing legacy health information systems.
    • Privacy and security constraints. Companies must balance making data available for AI development against mandatory global patient privacy requirements.

Data readiness is even more daunting for multinational corporations operating under multiple privacy regimes (GDPR, HIPAA, etc.) while trying to establish AI systems for cross-jurisdictional operation.
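The fragmentation and quality issues above can be screened before any model is trained. The sketch below is a minimal, illustrative Python audit pass; the field names and coding vocabularies are assumptions for the example, not any regulatory standard.

```python
# Minimal sketch of a data-readiness audit for AI training records.
# Field names and allowed code sets are illustrative assumptions.

def audit_records(records, required_fields, allowed_codes):
    """Count missing values and out-of-vocabulary codes per field."""
    issues = {"missing": {}, "invalid_code": {}}
    for rec in records:
        for field in required_fields:
            value = rec.get(field)
            if value is None or value == "":
                issues["missing"][field] = issues["missing"].get(field, 0) + 1
            elif field in allowed_codes and value not in allowed_codes[field]:
                issues["invalid_code"][field] = issues["invalid_code"].get(field, 0) + 1
    return issues

# Example: two records from different legacy systems coding sex inconsistently.
records = [
    {"patient_id": "P1", "sex": "M", "diagnosis_code": "E11.9"},
    {"patient_id": "P2", "sex": "Male", "diagnosis_code": ""},
]
report = audit_records(
    records,
    required_fields=["patient_id", "sex", "diagnosis_code"],
    allowed_codes={"sex": {"M", "F", "U"}},
)
print(report)
# {'missing': {'diagnosis_code': 1}, 'invalid_code': {'sex': 1}}
```

In practice such checks would run against each source system's exports, turning "data fragmentation" from an abstract concern into a concrete per-field issue count.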

Organizational readiness: The human factor
At 20%, organizational readiness represents a significant change management barrier to AI deployment. A substantial shift is needed to improve organizational data literacy and accelerate competence in AI usage. These barriers to change include:

    • Skills gaps. Few professionals combine expertise in both AI technology and global healthcare regulation.
    • Cultural resistance. Well-established healthcare organizations may be resistant to the adoption of innovative technologies or not recognize a positive risk-benefit to AI deployment.
    • Process redesign. AI deployment tends to require drastic redesigns in existing workflows and quality processes, as well as overhauls of data management practices.
    • Leadership alignment. Companies must ensure AI initiatives receive executive support and strategic alignment with clear understanding of the nuances of operating in a regulated healthcare industry.

The cost paradox
The low ranking of cost at 4% in IQVIA's survey must be read carefully. It doesn't mean AI deployment is inexpensive, but rather that regulatory and technical challenges loom larger than the cost of AI. Healthcare companies realize that the cost of non-compliance or of failed AI implementations could be several times higher than the technology's cost. For severe breaches, the reputational damage could be extremely high.

Also, many organizations view AI investments as strategic priorities that unlock accelerated market access and competitive advantages. The necessity of competing by using AI, coupled with the expectation of meaningful productivity gains, makes cost a secondary consideration relative to the value of targeted, controlled and successful AI adoption.

Critical success factors
Success with AI is more than a matter of driving a market perception of “technological advancement.” It is about having solid organizational underpinnings that support evolving regulatory requirements while delivering measurable value to patient outcomes and companies alike.

Transparency and governance
Each panelist emphasized that successful AI implementation requires strong governance structures. Chris Hart provided some of the critical components that span technical and organizational realms:

    • Clear accountability. Certain individuals or groups must have independent decision-making authority and direct links to regulators.
    • Transparency. Teams implementing programs must be able to detail what AI is used for, how it operates and what it will decide.
    • Bias management. Programs require continual assessment of algorithmic bias in inputs and outputs.
    • Protection of individual rights. Programs must include safeguards for personal information used for training or affected by outputs.

The regulatory framework will also have to include comprehensive training programs for staff interacting with AI systems. The human aspect is often decisive during regulatory audits, where officials assess not just the technology but also the organization's capacity to manage it responsibly.

Hart had an incisive observation about documentation from a legal perspective: documentation demonstrates diligence, but it also creates evidence trails that can prove consequential if the situation gets out of hand. "Documentation can be double edged," he warned, "so it is important to have legal advice guide one on how to document properly."

Data strategy as foundation
Mike King emphasized that data strategy precedes technology selection. "What data do we have? What data don't we have? How is it structured, and where is it?" These foundational questions must be answered before selecting appropriate AI tools. The data readiness challenge extends beyond availability to encompass quality, governance and accessibility. Organizations must manage several dimensions of importance:

    • Verification and validity of data. Training data must meet regulatory standards of accuracy and completeness.
    • Data traceability and lineage. Data manipulations and sources must be adequately documented.
    • Data security and privacy. Companies must implement appropriate protections for sensitive health data.
    • Bias detection and mitigation of data. Companies must proactively seek out possible sources of algorithmic bias in training datasets and create controls to address ongoing potential for bias.

King also stressed that companies must know their data environments before implementing AI solutions. “Start with the problem statement or situation first. Then look at the associated healthcare processes and mandated outcomes. Then look at the data that is or isn’t available and understand its structure, quality, congruence and potential bias. Once this is complete, you can then look at solutioning and decide which is the most suitable AI approach for the organization.” He provided a cautionary tale of an organization attempting to use Large Language Models for a specialty product with sparse data to draw from, noting that this situation required a different approach due to constraints in the available data and the nature of the problem being solved.

Risk-based implementation framework
The panelists consistently advocated risk-based approaches that connect AI deployment to business goals and regulatory requirements. The methodology involves several significant considerations:

    • Problem definition first. Every AI deployment should begin with a clear problem statement and customer need definition. King emphasized starting with "what is the problem statement, or what's the customer concern at present?" before technological solutions.
    • Regulatory mapping. Understanding what rules apply to given use cases prevents costly compliance failure. Established frameworks (e.g., medical device governance, in-vitro diagnostic governance, drug guidelines) and new AI-specific requirements should be examined and understood by organizations.
    • Cost-benefit analysis. While cost was the lowest-ranked priority in the survey at 4%, sound AI deployment is founded on demonstrated value and return on investment.
    • Organizational capability assessment. Successful implementation depends on having the appropriate skills, processes and cultural readiness for AI adoption.

One important takeaway from Alex Denoon emphasized the rationale for implementation: "Ensure you have a logical reason for what you're doing now. It may very well be a little bit wrong, but at least you understand why you're doing what you're doing and why you're doing it now. If you haven't got a logical reason for what you're doing now, it'll become more complicated later, and you'll likely regret it."
