Blog
How to approach risk when implementing RBQM
Gayle Hamilton, Director, RBQM, Digital Trial Management Suite
Mar 20, 2023

Risk-based approaches are now embedded in many clinical trial processes, yet many sponsors and CROs still run trials without identifying critical risks or quality tolerance limits, while continuing to perform 100% source data verification. With everything else a clinical trial demands, it is easy for researchers to put off adopting a risk-based approach with the thought, "if it's not broken, don't fix it." That reasoning creates a significant problem, however: running protocols the way they were developed even 15 years ago is itself a broken process, especially when you consider the time, cost, and quality implications.

Implementing a risk-based approach can be intimidating. There is a general belief that it is extremely time-consuming to set up and that it requires an engineering degree to sort and analyze the data.

No risk-based engineering degree? Here’s guidance on where to begin.

Industry regulations advise that all risks be documented in a single risk plan. It is therefore a matter of determining which risk processes are already part of your standard operating procedures and documenting them cohesively. Typically, sponsors and CROs have a go-to list of risks for specific phases of clinical trials. If you already have these risks defined, that is a great place to start.

Regulators also advise identifying risks from different perspectives. All levels of risk – site, subject, and trial – are important to consider as you build your risk plan.

SITE RISKS

Site risks are generally the easiest to understand the need for, because historically we have already been performing these reviews. Site risks we've been tracking that may be familiar to most project teams include:

  • Aging queries
  • Protocol deviations
  • Adverse events
  • Serious adverse events

How we track these risks with criticality in mind, though, is different. But how different?

Data normalization is key to comparing sites on an equal footing. Prior to risk-based approaches, we simply counted queries, PDs, AEs, and SAEs. Now, with a risk-based approach in mind and an eye toward criticality, we also factor in how long each site has been open and how long its subjects have been active in the trial when making those calculations. This lets us see which sites are truly at risk on a continual basis.
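As a rough sketch of this normalization idea in Python – the field names and the choice of subject-months as the exposure unit are illustrative assumptions, not prescribed by any regulation:

```python
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    site_id: str
    open_queries: int
    protocol_deviations: int
    days_open: int
    active_subjects: int

def normalized_rates(site: SiteMetrics) -> dict:
    """Normalize raw counts by site exposure so that sites of different
    sizes and enrollment durations can be compared on an equal footing."""
    # Exposure in subject-months (hypothetical unit choice)
    subject_months = site.active_subjects * (site.days_open / 30.0)
    if subject_months == 0:
        return {"queries_per_subject_month": 0.0,
                "deviations_per_subject_month": 0.0}
    return {
        "queries_per_subject_month": site.open_queries / subject_months,
        "deviations_per_subject_month": site.protocol_deviations / subject_months,
    }

# A small site open for 3 months can now be compared fairly
# with a large site that has been open for a year.
site_a = SiteMetrics("SITE-001", open_queries=12, protocol_deviations=3,
                     days_open=90, active_subjects=4)
print(normalized_rates(site_a))
```

Raw counts would flag the busiest site; the normalized rates flag the site with the most issues per unit of exposure, which is what criticality-focused review actually cares about.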

SUBJECT RISKS

After implementing the standard risks and the site risks, we logically want to know the subject risks related to the protocol. These are the critical data and processes, or CDPs, that are fundamental to a risk-based approach. How are they identified? How are they tracked? Identifying and tracking these risks can seem like an enormous task – and it can be, unless you have the right system in place that does the bulk of the work for you. The right system should draw on decades of historical data and experience, brought together into libraries of risks at the indication level for the project team to select from, making the process quick and decisive even for teams new to risk-based approaches. If you don't have such an intuitive system in place, the protocol derisking task can indeed be cumbersome and time-consuming.

Regulations advise that the identification and review of subject-level risks be completed through a cross-functional review, involving operational leads from several project team roles to:

  • Converge and agree on what data is critical
  • Provide the traceability for the data selected as critical
  • Determine which data can be reviewed by which project team members – alone or in conjunction with one another – and what part of the data each role will review

If the system can perform the bulk of the identification for the project team, only minor tweaks and a small time investment are needed to make the data specific to the protocol.

TRIAL RISKS: QUALITY TOLERANCE LIMITS (QTLS)

The most recent risks to be designated as part of the risk-based approach are the trial-level risks: Quality Tolerance Limits, or QTLs. These are the "data that matter" to the endpoint analysis. Recent regulatory updates don't specifically require identifying QTLs for all clinical trials; nevertheless, they remain important to identify. Without QTL data, the trial likely won't have a proper endpoint analysis, potentially resulting in a terminated trial or limited submissions to regulatory authorities.
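A QTL is, at its core, a threshold on a trial-level parameter that triggers investigation when crossed. A minimal sketch, assuming a hypothetical parameter and illustrative limits (real QTLs and any secondary warning limits are set per protocol, not these values):

```python
def check_qtl(parameter: str, observed_rate: float,
              limit: float, secondary_limit: float) -> str:
    """Flag a trial-level parameter against its quality tolerance limit.
    The limit and secondary (early-warning) limit are protocol-specific;
    the values used below are purely illustrative."""
    if observed_rate >= limit:
        return f"{parameter}: QTL breached - investigate and document"
    if observed_rate >= secondary_limit:
        return f"{parameter}: approaching QTL - monitor closely"
    return f"{parameter}: within tolerance"

# Example: premature-discontinuation rate against a hypothetical
# 15% QTL with a 12% early-warning limit.
print(check_qtl("premature_discontinuation", observed_rate=0.13,
                limit=0.15, secondary_limit=0.12))
```

The early-warning band is the point: a QTL is only useful if the team sees the trend before the limit is breached, while corrective action can still protect the endpoint analysis.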

Circling back to having the right system, one that draws on libraries of historical data: such a system is ideal for identifying QTLs at the indication level. Historical data reviews applied at the indication level can provide guidance on which QTLs are advised for different therapeutic indications.

Examples of QTLs are provided by TransCelerate (the industry consortium that first demonstrated that risk-based approaches work); however, tried-and-true experience – libraries of QTLs built into a system – provides an even smoother and smarter experience when you apply the QTLs typically associated with specific therapeutic areas and/or indications.

SUMMARY

In summary, risks need to be identified so they can be acted upon if and when limits are breached. The risk-based system you use can make or break the risk-based experience. A great system can integrate data across a risk plan and into monitoring plans to support dynamic monitoring and data analysis. A limiting system, by contrast, will likely require an enormous amount of manual work, increase trial costs, and provide only pieces of the data, tracking quality piecemeal rather than giving a comprehensive view of the risks. Identifying system capabilities before investing will help you make the best decision and will likely save time and costs on critical-to-quality data down the road.
