AI won't steal your job – but it will make it easier
How to show scientists that machine learning can make the industry more efficient
Lucas Glass, Global Leader, Analytics Center of Excellence, IQVIA
Apr 07, 2022

Machine learning (ML) has significant potential to accelerate and improve drug development and marketing, with AI-driven algorithms trained to rapidly scour vast stores of data. These algorithms can uncover valuable insights and trends buried within that data that would otherwise take human analysts months to identify.

The impact of machine-driven insights on drug development can be substantial. For example, algorithms have already been used to:

  • Identify undiagnosed rare disease patients in global electronic health records.
  • Identify the best sites based on thousands of past trial outcomes and recruiting trends.
  • Determine why sub-populations of patients respond differently to an experimental drug by reviewing historical study data.
  • Gauge the feasibility of dozens of trial designs to identify the optimal strategy based on a sponsor’s budgets, timelines, and competition.

But none of this is possible unless the stakeholders involved in these projects trust the machine learning algorithm to do its job. If scientists don’t trust the technology, they will ignore its recommendations and lose out on their value. In an industry built on the expertise of human scientists, using insights generated by a machine to support decision-making can be a difficult adjustment.

Hello HAL

A common misconception about machine learning among physicians and scientists is that machines will replace their decision making rather than support it. The experts running trials believe they do not need machine learning because they already have the right information to treat their patients. If the industry tries to introduce machine learning into that process without addressing their doubts and concerns, experts may be skeptical of the intent – particularly if the algorithm’s predictions conflict with their own.

Algorithms often make faster, more objective recommendations than human experts – not because they are smarter, but because they can rapidly evaluate high volumes of data, without personal bias, and use it to generate logical conclusions. Algorithms can evaluate a database and deliver a detailed summary of results with predictions in a fraction of the time it would take the scientists. But one part of the process that is often underemphasized, yet integral to comfort with ML adoption, is that the final decisions based on these results are always made by humans.

With a clear understanding of this model, researchers can see that the algorithm is intended to support their work, not replace it, by saving time. This is valuable because it frees up some of the time they spend on tedious analytics today and allows them to focus on higher-value, more strategic work.

To bridge this gap in trust, we must help scientific experts understand the technology better through detailed explanations, demos, and/or hands-on testing, allowing them to see firsthand the value and time savings it can bring to their work. Understanding how the algorithms work in practice will increase decision makers’ confidence that the algorithms’ prediction methods align with the approaches they would otherwise take themselves. Furthermore, exercises like this can also surface insights that the researchers might not have considered on their own, highlighting not only the efficiency of ML but also the thoroughness of its evaluative processes.

However, these researchers are making high-stakes decisions with significant consequences, so they need confidence in how the algorithms work, what data they use to make predictions, and how they reach conclusions. That clarity can give them the confidence to use this new source of insights in their decision-making process.

Where to begin

My recommendation for convincing users that decision intelligence will add value is to begin by explaining that accurate data science isn't enough. Many leaders assume that having solid algorithms and analytic methodologies in place sets them up for success. However, what matters most is what people within the organization do with the output of those algorithms and analyses. Today, those outputs are often underutilized, and without an appreciation of their operational value, adoption will stall.

In healthcare and the life sciences, decisions have critical, and sometimes even life-saving, implications. Because of this, it is particularly important for decision makers to understand how algorithms work and why they are reliable so that they are comfortable using them. With this buy-in, experts can confidently turn to algorithms to help them make faster, better-informed decisions that accelerate research, reduce risk, and improve care. That is a value proposition worth fighting for.
