Navigating the AI regulatory maze: Expert perspectives from healthcare executives - Part 3
Mike King, Senior Director of Product & Strategy, IQVIA
Alex Denoon, Partner, Bristows LLP
Chris Hart, Partner, Co-Chair, Privacy & Data Security Practice, Foley Hoag LLP
Sep 05, 2025

The integration of AI in healthcare is not a matter of "if" but "when and how."

From machine learning algorithms that enhance diagnostic precision to generative AI solutions that automate the production of draft regulatory content, the healthcare industry stands at an inflection point. The promise of AI is undeniable: faster device and drug development, more tailored medicine and improved patient outcomes. But with this promise come novel regulatory nuances that must be carefully navigated. As companies in the life sciences sector race to implement effective AI solutions, they're confronted with a complex regulatory maze, technical challenges and strategic decisions that will shape the future of patient care.

This was the core of a recent LinkedIn Live panel discussion chaired by journalist Jon Bernstein, where Mike King, senior director, Product & Strategy at IQVIA, was joined by fellow industry experts Alex Denoon, life sciences regulatory partner at Bristows LLP, and Chris Hart, litigation partner and co-chair of the Privacy & Data Security Practice at Foley Hoag LLP, to offer their insights on navigating the global AI regulatory landscape from a quality assurance and regulatory affairs perspective.


PART 3

Practical Implementation Tips
The panelists provided practical, implementable advice for organizations just starting out with AI or wanting to improve existing implementations.

1. Start with documentation
Documenting every process and decision provides proof of action, supports auditing and serves several essential functions:

    • Regulatory compliance. Demonstrate due diligence and structured thinking to regulators during inspection or approval.
    • Organizational learning. Facilitate institutional memory that supports ongoing improvement.
    • Risk management. Demonstrate reasonable decision-making processes in the event of contingencies.
    • Change management. Facilitate knowledge transfer, because teams do turn over and projects expand.

Chris Hart cautioned, “Documentation must be developed in tandem with legal counsel so that it provides safeguards against liability.”

2. Focus and deliver value
Mike King suggested a disciplined, focused approach: "Focus exclusively on one or two of the most valuable use cases for your company and precisely define the problem statement at hand. Rather than trying to ‘boil the ocean,’ select specific, high-value applications, gain results, sell the value internally and reinvest benefits back into new projects." This focused approach has several advantages:

    • Risk reduction. It limits initial exposure while developing organizational capabilities.
    • Demonstration of success. Creating tangible evidence of AI’s value leads to internal buy-in.
    • Optimization of resources. It concentrates scarce funds and talent on high-value projects.
    • Accelerated learning. It creates organizational capabilities that can be applied to subsequent projects.

King further stated that, “the organizations that attempt to revolutionize themselves through full AI overhauls struggle with complexity and resource constraints, but the ones that start with strategic implementations progressively build robust capabilities over time.”

For businesses seeking short-term value, Alex Denoon suggested advertising and claims management as areas where customers receive "the biggest bang for their buck" with instant productivity gain and clear benefits.

3. Strategic standards engagement
Staying current with new requirements, and helping to create them, are prime success drivers for AI implementation. "Participate in helping devise and draft and develop standards and guidance,” Alex Denoon counseled. “And if you don't have bandwidth to participate, as a minimum, keep them on your horizon. These standards are going to come and bite." Such an engagement strategy has numerous benefits:

    • Early insight. Ensure understanding of regulatory direction before it becomes finalized.
    • Opportunities for influence. Help shape standards and regulations so they reflect pragmatic implementation realities.
    • Building networks. Learn by interacting with other organizations that have the same challenges.
    • Competitive edge. Help the company get ahead of pending requirements before they become obligations.

Organizations must track standards development across various bodies, such as ISO, the EMA, FDA guidance documents, EU technical standards committees and a range of global regulators. Beyond standards work, the panelists highlighted several broader organizational enablers:

    • Cross-functional coordination. Demolish silos between IT, regulatory affairs, quality assurance and business units to align AI efforts with company goals and broad regulatory compliance.
    • Training and education. Develop AI literacy and data literacy within the organization, not just within technical teams. Regulatory experts should understand AI capabilities and limitations, and AI developers should understand global regulatory needs.
    • Change management. Get the organization ready to adopt the new workflows, decision-making processes and quality control processes that AI deployment usually requires.
    • Vendor management. Develop expertise to evaluate AI vendors, establish whether vendors can guarantee compliance with global regulations, and manage long-term vendor relationships in a regulated environment.

4. Managing bias and building trust
The issue of algorithmic bias elicited heated discussion among the panelists, a reflection of its relevance in healthcare applications, where biased judgments can directly impact patient safety and health equity.

Asked how regulators are supposed to know whether an AI system is biased without having exactly the right inputs or data, the response was consistent: transparency, governance and bias management systems are essential preconditions for regulatory clearance. "If you can't explain what's going on, then you don't deserve to get the endorsement," Denoon said. “Organizations must have strong monitoring systems throughout the whole AI lifecycle, from development to deployment and ongoing operation.”

Managing sources of bias and mitigation
King restated the bias question in pragmatic terms: "AI is programmed and executed by humans who've made a choice regarding the programming or the software that's going to be behind it, the data set you train on, the data for validation, et cetera, et cetera. So, the question isn't ‘Is there a bias?’ There is. The question is, ‘How do you eliminate that bias?’"

King offered a compelling real-world example involving pulse oximeters and AI models shown to overestimate oxygen saturation in darker-skinned individuals because the training dataset was biased. This example illustrates how seemingly neutral medical devices can exacerbate health disparities unless bias is proactively addressed. The solution requires a multi-pronged approach:

    • Pre-deployment bias assessment. Test extensively in diverse populations and usage scenarios before regulatory submission.
    • In-operation monitoring. Continually evaluate AI outputs for indications of bias in decision-making.
    • Post-market surveillance. Ensure institutional gathering and analysis of real-world performance data to identify bias not apparent during development.
    • Corrective action processes. Institutionalize processes for taking action on identified bias, e.g., model retraining and deployment updates.

The AI-monitoring-AI model
Large healthcare providers are already doing this, using "several different tools all day, every day, to cross-interrogate the tools they're using to triangulate" findings and raise potential issues. This redundancy provides both technical protection and regulatory benefit.
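The cross-interrogation pattern described above can be sketched as a simple consensus check: run the same question through several independent tools and escalate any finding on which they disagree. A minimal illustration, in which the tool names and outputs are hypothetical:

```python
from collections import Counter

def triangulate(findings):
    """findings: {tool_name: answer}. Returns (consensus, flagged),
    where flagged is True when any tool dissents from the majority
    answer and the finding should be escalated for human review."""
    counts = Counter(findings.values())
    consensus, support = counts.most_common(1)[0]
    flagged = support < len(findings)  # any dissent triggers review
    return consensus, flagged

# Hypothetical outputs from three independent screening tools
agreeing = {"tool_a": "compliant", "tool_b": "compliant", "tool_c": "compliant"}
split = {"tool_a": "compliant", "tool_b": "non-compliant", "tool_c": "compliant"}

print(triangulate(agreeing))  # unanimous: no flag
print(triangulate(split))     # disagreement: flagged for review
```

Production systems would compare richer outputs than single labels, but the design choice is the same: disagreement between tools is treated as a signal for human oversight, not resolved silently.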

Regulatory Evolution, Adaptation and AI Adoption
Most healthcare regulators are already starting to use AI tools to assess AI applications, Denoon said. This indicates that regulatory agencies view AI not just as a technology to be regulated, but also as a way to further their own oversight capabilities.

This regulatory adoption creates interesting dynamics in which AI systems can be reviewed by other AI systems, potentially enabling sharper, more precise regulatory reviews than traditional human-only review processes allow.

Hart emphasized that "the regulatory landscape is going to shift in some pretty unexpected ways, both in terms of which jurisdictions are going to be particularly aggressive, and other jurisdictions, which might want to be innovative." This evolution will be accompanied by challenges and opportunities for innovative organizations.

Certain jurisdictions may become regulatory pioneers, developing comprehensive templates that other countries follow. Others may position themselves as AI-friendly to attract AI development and deployment. Organizations must monitor these trends and adapt their global strategies and business operations accordingly.

The rapid pace of technological innovation also means that current regulatory frameworks can quickly become outdated or in need of substantial revision. Additionally, robust regulatory oversight requires surveillance of vertical healthcare regulations (e.g., medical device, in-vitro diagnostic and drug specifics) as well as horizontal regulations that cut across many industries (e.g., AI regulations, data privacy, environmental health and safety). Successful companies will build flexibility into their plans for AI compliance so they can easily respond to new obligations without affecting core operations. Ultimately, this approach will enable a dual focus on patient safety and commercial performance while minimizing business risks that could adversely affect global market access activities.

Looking Forward
The session ended with recognition that the regulatory environment will continue to change in unexpected ways, forcing organizations to build adaptive skills instead of rigid compliance structures.

Building Future-Ready Organizations
Surviving the new AI regulatory landscape will require businesses to simultaneously innovate and comply. This capability is based on a variety of critical organizational characteristics:

    • Adaptive governance frameworks. Traditional hierarchical decision-making may be too slow for the pace of AI development. Organizations need governance frameworks that react quickly to emerging technology and shifting rules without compromising on appropriate oversight.
    • Cross-functional expertise. The intersection of AI technology and healthcare regulation calls for people with experience in both arenas. Organizations will need to invest in developing this blended expertise through training, recruitment and collaboration.
    • Scenario-planning capabilities. Due to regulatory uncertainty and global variance, organizations must develop several compliance scenarios and build strategies that stand up in the presence of different regulatory outcomes.
    • Technology readiness. AI tools can become key to managing regulatory complexity. Organizations must develop the capability to apply AI to regulatory intelligence, compliance tracking and documentation management.

The Future
The AI revolution in healthcare is not around the corner; it's already unfolding in organizations around the world. The discussion is no longer “if” or “when” to implement AI, but “how” to implement it responsibly within an increasingly dynamic regulatory landscape. Success requires a careful interplay of bold innovation and global regulatory prudence. Businesses must be visionary enough to pursue AI's revolutionary potential, yet disciplined enough to ensure patient safety, product quality and global compliance remain on the right side of evolving regulations.

The takeaways from the panelists offer a realistic roadmap: begin with clearly stated problem statements, establish solid data and AI governance frameworks, maintain systematic documentation and stay laser-focused on measurable results. Most of all, view this as a continuous learning and adaptation process, and not a destination.

In closing, the panel offered a strategic framework for AI implementation within healthcare organizations.

As Alex Denoon asserted, the firms that get "the biggest bang for their buck" are those applying AI to near-term, low-hanging-fruit uses, such as ad optimization and claims handling, while concurrently building long-term capability. This pragmatic strategy is a sensible direction forward in an uncertain but promising field. Organizations that can integrate visionary innovation with operating discipline will be best placed to unlock AI's transformational power while maintaining the trust and safety healthcare demands.

“The regulatory maze may be complex,” concluded Mike King. “But with proper preparation and strategic thinking, it is entirely navigable. Those who approach AI strategically will be best positioned to realize its transformative potential while ensuring patient safety and regulatory compliance. The future belongs to those who advance steadily yet confidently to seize opportunity while fulfilling duty to the most important customer of all in healthcare — the patient.”
