Navigating the AI regulatory maze: Expert perspectives from healthcare executives - Part 1
Mike King, Senior Director of Product & Strategy, IQVIA
Alex Denoon, Partner, Bristows LLP
Chris Hart, Partner, Co-Chair, Privacy & Data Security Practice, Foley Hoag LLP
Sep 05, 2025

The integration of AI in healthcare is not a matter of "if" but "when and how."

From machine learning algorithms that enhance diagnostic precision to generative AI solutions that automate the production of draft regulatory content, the healthcare industry stands at a decision gate. The promise of AI is undeniable: faster device and drug development, more tailored medicine and improved patient outcomes. But with this promise come novel regulatory nuances that must be carefully navigated. The clock is ticking on the implementation of the EU AI Act, and as companies in the life sciences sector race to implement effective AI solutions, they're confronted with a complex regulatory maze, technical challenges and strategic decisions that will shape the future of patient care.

Figure 1: EU AI Act Timeline (source: https://www.pwc.de/en/risk-regulatory/responsible-ai/navigating-the-path-to-eu-ai-act-compliance.html)

This was the core of a recent LinkedIn Live panel discussion chaired by journalist Jon Bernstein, in which Mike King, senior director of Product & Strategy at IQVIA, was joined by fellow industry experts Alex Denoon, life sciences regulatory partner at Bristows LLP, and Chris Hart, litigation partner and co-chair of the Privacy & Data Security Practice at Foley Hoag LLP, to offer their insights on navigating the global AI regulatory landscape from a quality assurance and regulatory affairs perspective.

This three-part blog post series explores the discussion among this panel of experts, outlining both the exciting opportunities and significant challenges healthcare organizations face as they embrace AI technologies.

Part 1 explains that AI is already in play and starting to impact healthcare and the supporting supply ecosystem. It also looks at regional regulatory differences, which are important and often complex considerations when integrating AI into operations.

In Part 2, the panel focuses on the priority hierarchy for AI implementation, the challenges of being “data-ready” and “organizationally ready,” and critical success factors.

Finally, in Part 3, the panel closes out with practical tips for implementation, identifying and mitigating sources of bias, regulatory evolution, and AI adoption and adaptation, and looks to the future.

PART 1

The AI transformation is already here
"Each person I work with in the life sciences sector is either already deploying, set to deploy or looking where best to deploy AI," said Alex Denoon, capturing the widespread commitment to AI across the industry. This accelerating shift from curiosity to active adoption is fueled by three key drivers:

    1. Operational efficiencies. Organizations want to streamline operations, maximize utilization of workforces and automate suitable activities.
    2. Quality enhancements. When deployed correctly, AI delivers higher precision and more consistent output over time than traditional human-led processes.
    3. Market imperative. Companies realize that failure to adopt AI solutions that drive tangible value could leave them behind in the market.

Mike King offered valuable historical context, pointing out that, although AI has existed for almost 70 years, the new wave is qualitatively different. "Right from the beginning, individuals in the computer field were looking to figure out if we could mimic human behavior with AI," he said. "The progress has been nothing short of phenomenal: from early computer vision and pattern recognition to today's advances in large language models, machine learning techniques and novel agentic AI systems that one day will perhaps function with greater independence.

"Programmers and those who can utilize AI tools are very much at the leading edge of a whole variety of industries," King said, further explaining how AI is shifting from a specialized technical skill to a broader mainstream business advantage. This democratization of AI capability has been fueled both by public awareness of AI technology itself and by greater awareness of the human impact of product quality issues, from malfunctioning medical devices such as metal-on-metal hip implants to deceptive practices such as the Poly Implant Prothèse breast implant scandal. This combination of broad awareness of AI's capabilities and the potential impact of product quality issues is in part fueling a public call for the use of AI to drive better, safer and more effective healthcare solutions.

The convergence of this public awareness with accessible AI technologies has created what King calls "exciting points of innovation" and simultaneously highlights the urgent need for good governance and regulation in the highly regulated healthcare industry.

A tale of two regulatory approaches
The regulatory landscape varies starkly across jurisdictions, presenting an intricate compliance puzzle for global organizations. International businesses and companies building a global market access strategy must understand these differences.

United States: Accelerating innovation
Chris Hart described the new U.S. approach as a dramatic shift from safety-focused to competitiveness-focused regulation. The Biden Administration's sweeping 2023 AI safety executive order was revoked under the Trump Administration and replaced by a greater emphasis on AI competitiveness and innovation in international markets.

"The crude way of putting it is that it's uncertainty and chaos. The elegant way of putting it is that it's a work in progress," Hart said. "That is not to claim complete deregulation: regulatory frameworks such as FDA regulation of medical devices remain robust and vital. But there's tremendous uncertainty about enforcement priorities and the development of new state regulations."

The federal landscape remains multi-layered, and different agencies still maintain jurisdiction over various aspects of AI adoption. Consumer protection regulations still apply to AI-generated claims and representations, and privacy and security rules still govern how data is handled. But the enforcement approach and priority areas may change radically.

State regulation adds yet further complexity. Federal legislation initially envisioned a moratorium on state regulation of AI, but the provision ultimately was removed, leaving states free to establish their own frameworks. Hart explained that various states have experimented with different approaches, from transparency requirements to governance specifications to consumer rights protections, with the result being a patchwork of compliance obligations.

Europe: Extensive and complicated
By contrast, Europe has implemented broad, principles-based regulatory frameworks. "Europe loves to regulate," Denoon explained, referencing the EU AI Act and the EU GDPR as examples of large-scale schemes that other countries then mimic.

"The EU AI Act is the most innovative AI regulation in the world. It uses a risk-based classification system under which AI applications are classified as prohibited, high-risk, limited-risk or minimal-risk," Denoon continued. "For healthcare organizations, this classification has significant implications. Almost all medical devices that require the involvement of notified bodies are automatically high-risk AI systems, for which additional approvals run concurrently with standard medical device procedures."

But not all of healthcare falls within the scope of the AI Act: "Medicines are not, delivering healthcare is not, conducting clinical trials is not," Denoon said. That creates interesting jurisdictional limits: pharmaceutical development, clinical trials and treatment delivery could fall beyond the immediate purview of the AI Act, yet medical devices incorporating AI are subject to the full additional layer of regulation.

Denoon confidently predicted delays in implementing the Act, citing poor preparation and missing infrastructure: "They forgot to establish the infrastructure required for the AI Act to actually work." Despite such implementation problems, European frameworks offer a key strategic advantage for international organizations. "If you conform to them, you win a huge share of the world, because many countries coast on European models," he added.

The global complexity challenge
King compared the broader regulatory complexity to a four-dimensional Rubik's Cube: "Differing products in differing markets are governed by different regulations, and all of these are changing in an ongoing way over time." Global quality and regulatory teams must simultaneously deal with vertical regulations (medical devices, in vitro diagnostics, drugs, biologics) and horizontal regulations covering AI, data protection, cybersecurity, environmental health and safety, and wireless communications. Organizations wanting to deliver healthcare solutions in global markets need to look beyond the often well-understood U.S. and European requirements to the intricacies of local variations in regulations and product standards.

This global variance presents both a challenge and an opportunity: the challenge is the weight of potentially crippling compliance complexity; the opportunity is that AI itself can help companies navigate that complexity through smart regulatory intelligence systems, AI-enabled workflows and automation.
