Understanding and Embracing Generative AI for Life Science Safety
Updesh Dosanjh, Practice Leader, Pharmacovigilance Technology Solutions, IQVIA
Marie Flanagan, Director, Offering Management, Vigilance Detect, IQVIA
Jul 13, 2023

Generative AI has democratized AI in a way not seen before. The open availability of Generative AI tools, such as ChatGPT, Google’s Bard, and Bing’s search engine chatbot, makes these tools far broader in application and more accessible than previous AI solutions, which were restricted to those with expert knowledge. This widespread availability has brought Generative AI into public and media focus. As life science companies grapple with growing data volumes and complexity, they are seeking to understand how these tools can be used to improve patient safety and regulatory approval processes. Generative AI offers Life Science Safety Teams a huge opportunity to search public data in a richer, more user-readable way than the manual efforts of the past allowed.

The Potential of Generative AI

In terms of benefits for Drug Safety, Generative AI improves access to data and unlocks information that can enrich the entire chain of Safety operations, from drug discovery to post-market surveillance and beyond. With Generative AI, Safety teams no longer have to click through articles and read each piece in full to review it for safety implications. Instead, users can get a direct answer to the specific questions they are trying to answer when analyzing safety data from digital sources. Using intelligent search and response functions, Generative AI tools enable Safety teams to seek out and reach new data sources with far less effort than today’s safety strategies require.

Generative AI can also provide different ways of looking at data, producing visualizations a human analyst might not think of. A worker with no Microsoft Excel or graphic design skills could ask a Generative AI tool for a graph showing the historic levels of adverse events (AEs) associated with a specific product. This lets Life Science Safety Teams examine data more efficiently without data science skills or training in a specific workplace tool.
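To make the charting scenario above concrete, here is a minimal sketch of the aggregation a tool would perform behind the scenes before rendering an AE trend graph. All of the data and names here are invented for illustration; a real Safety team would pull reports from a safety database.

```python
from collections import Counter

# Hypothetical adverse-event (AE) reports for a single product.
# In practice these would come from a safety database; the values
# below are invented purely for illustration.
ae_reports = [
    {"year": 2020, "term": "headache"},
    {"year": 2020, "term": "nausea"},
    {"year": 2021, "term": "headache"},
    {"year": 2021, "term": "rash"},
    {"year": 2021, "term": "nausea"},
    {"year": 2022, "term": "headache"},
]

def ae_counts_by_year(reports):
    """Aggregate AE report counts per year -- the series a charting
    tool would plot to show the historic level of adverse events."""
    return dict(sorted(Counter(r["year"] for r in reports).items()))

counts = ae_counts_by_year(ae_reports)
print(counts)  # {2020: 2, 2021: 3, 2022: 1}
```

The resulting year-to-count series is exactly what a bar or line chart of historic AE levels would be drawn from; the Generative AI tool's value is that the worker never has to write this step themselves.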

Further, there is an opportunity to adapt and customize open-source Generative AI platforms to surface meaningful safety insights. This customization will enable the tools to better inform signal management and risk activities, and ultimately provide functionality that can fill gaps in today’s real world data analysis capabilities. Most companies that IQVIA has spoken to about Generative AI expect it to unlock new potential and efficiency within safety operations, but they are also trying to understand the risks of generally available Generative AI tools and the guardrails that need to be established.

The Risks of General, Externally Available Generative AI

AI is not well regulated today, although governments and regulatory agencies around the world are beginning to explore the regulation and use of Generative AI across certain critical industries, including life science. For example, the FDA recently called for input from the life science industry on the use of AI in drug development, including the development of medical devices intended to be used with drugs, to help inform the regulatory landscape in this area.

One of the biggest concerns around Generative AI is the risk that responses generated by the tools could contain false information or even fake references. If not properly vetted, this information could derail and misguide safety analysis and conclusions. The chances of generating responses containing incorrect data vary depending on which Generative AI solution is used. This makes it difficult to assess the validity of responses and even more challenging to cite real world references to back up claims.

There is also the issue of explaining how answers are produced by Generative AI tools. Most tools are trained on vast volumes of data pulled from various digital sources and websites. Identifying exactly why a specific response was generated is difficult, and regulatory authorities will not be satisfied with an unexplainable analysis. Meanwhile, the technology behind Generative AI is advancing rapidly, with new developments rolling out on a nearly weekly basis.

Mitigating Risk Is Not Eliminating Risk

There are ways to mitigate the risk involved in using Generative AI tools of both today and the future. However, organizations should keep in mind that mitigating risk does not equate to eliminating risk.

Generative AI in the public sphere is where Google’s search engine was when it first launched, and it serves a similar purpose: simplifying the search for information and the processing and analysis of that information. Before Google, web users had to sort through long lists of results to find the site or topic they were looking for. Google made that process much faster, turning the web into an indexed resource that could be effectively mined for information. Even if you weren’t sure exactly what to search for, Google had a way of figuring that out for you.

Generative AI offers the same benefits to Life Science Safety Teams who are struggling to scrape the web for any and all potential safety signals that could affect the efficacy or safe use of their products. That said, not everyone knows how to use these types of tools. Even today, there are people who are good at using Google and people who are not. For example, some people still search Google by asking a direct question, such as “how many life science companies operate in the EMEA region?”, whereas a knowledgeable user might instead use a query such as “life science companies EMEA.” Generative AI users will likewise range in their knowledge of the tools. Just as with Google, there are specific ways to pose questions to Generative AI technology to get the needed output.

Understanding how to pose questions is key to using Generative AI tools successfully. The life science industry is likely entering a new era in which companies and their individual departments and teams will have a dedicated researcher trained to use Generative AI, just as teams today have dedicated researchers who uncover safety signals and potential risks. However, just as Google does not release its search algorithm, Generative AI vendors do not tend to release the inner workings of their tools. Even the most carefully posed question can therefore receive misinformation in the generated response. We run the risk of replacing time-consuming manual information compilation with time-consuming validity checking.
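Part of that validity checking can itself be partially automated. Below is a minimal sketch, under invented assumptions, of one such check: extracting PMID-style citations from a generated answer and flagging any that do not appear in a curated, trusted literature set, so a human reviewer can vet them before the answer informs safety analysis. The answer text, the PMIDs, and the trusted set are all hypothetical.

```python
import re

# Hypothetical Generative AI answer containing reference identifiers.
generated_answer = (
    "Product X shows elevated AE rates (PMID: 12345678). "
    "A 2021 review confirms this trend (PMID: 99999999)."
)

# Identifiers drawn from a curated literature database; invented here.
known_pmids = {"12345678", "87654321"}

def unverified_citations(text, trusted):
    """Extract PMID-style citations from generated text and return any
    not found in the trusted set, flagging them for human review."""
    cited = set(re.findall(r"PMID:\s*(\d+)", text))
    return sorted(cited - trusted)

print(unverified_citations(generated_answer, known_pmids))  # ['99999999']
```

A check like this does not prove a flagged reference is fake, only that it is unconfirmed; the point is to route human review effort to the citations most likely to be fabricated rather than re-verifying everything by hand.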

There are also risks associated with the use of public data and Generative AI tools. By querying a batch of products, Safety teams could be letting bad actors know what they’re looking for, opening opportunities for misinformation spreaders to inject false data into the analysis. And, as with Google, there will undoubtedly be bad actors and manipulators who figure out how to game the system, just as SEO specialists and misinformation spreaders learned to boost Google search rankings through specific terms and metrics. The industry must find a way to prevent misinformation from spreading through Generative AI.

IQVIA Is Building Tools for Life Science Teams to Maximize Value of Generative AI

While there are real challenges and risks to using Generative AI in life science operations, there are also significant benefits to be had if the risks are properly mitigated. IQVIA recognizes the risks in external tools and is developing best practices and tools designed for the life science industry to mitigate the risks associated with Generative AI.

Specifically, IQVIA is building a Generative AI solution that is controlled and assured and sits on top of known datasets. For Safety teams, this tool will be able to draft evaluation reports, perform signal detection analysis, and identify trends in data.

Until such a solution is available, use the best practices outlined in this blog, be aware of the risks, and always remember to vet and evaluate any Generative AI responses for validity. If you’re interested in discussing Generative AI with IQVIA, please contact us at SafetyPV@iqvia.com.