A significant amount of time and energy has been invested in recent years exploring the desirability (do we want it?), feasibility (can we do it?), and viability (is it worth it?) of integrating open source solutions into our clinical data pipelines, which transform source data into clinical study reports and submission data packages. In this October edition of the Open Source Open Forums, we will provide an update on the status of this initiative and continue to hear from you on what we’ve missed so far.
When this manuscript is complete, we hope to put to rest some of the burning questions we believe we can now answer. This will allow industry, and all the passionate people in it, to look ahead and start tackling the next horizon of challenges related to using open source solutions for clinical data pipelines. We hope you will contribute your expertise to this effort.
When considering sources of real-world data (RWD), it is important to look beyond the number of available patients. Recently released guidance from the FDA encourages researchers to build accurate, complete and traceable real-world datasets. Combining structured and unstructured electronic health record data, closed claims, and other sources is essential to building the patient journey.
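As a minimal sketch of what such linkage can look like in practice, the example below stacks event records from hypothetical EHR and claims extracts into a single chronological timeline per patient. The column names, values and patient identifiers are illustrative assumptions only and do not refer to any specific data standard or vendor format.

```python
import pandas as pd

# Hypothetical EHR and claims extracts; columns and values are illustrative only.
ehr = pd.DataFrame({
    "patient_id": ["P001", "P001", "P002"],
    "event_date": ["2023-01-05", "2023-02-10", "2023-01-20"],
    "event_type": ["diagnosis", "lab_result", "diagnosis"],
    "source": "ehr",
})
claims = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "event_date": ["2023-01-15", "2023-03-01"],
    "event_type": ["pharmacy_claim", "inpatient_claim"],
    "source": "claims",
})

# Stack the sources and order events chronologically within each patient
# to approximate a simple longitudinal patient journey.
journey = (
    pd.concat([ehr, claims], ignore_index=True)
      .assign(event_date=lambda d: pd.to_datetime(d["event_date"]))
      .sort_values(["patient_id", "event_date"])
      .reset_index(drop=True)
)
print(journey)
```

In a real study, this step would also need to handle record deduplication, differing code systems, and traceability back to each source, which is where the completeness and traceability expectations of the guidance come in.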
The use of artificial intelligence (AI), including machine learning (ML), technologies across all stages of the drug product life cycle may accelerate the delivery of safe, effective, high-quality drugs. As this data-driven technology continues to evolve rapidly across the drug development landscape, a responsive regulatory approach may be warranted to calibrate the requirements needed to meet safety and evidentiary standards. Such an approach can be based on an assessment of model risk, estimated by examining an AI model’s influence on regulatory decision-making and the potential consequences of wrong decisions if the model is inaccurate. It is rooted in an in-depth understanding of the specific application context and calibrates regulatory requirements in accordance with model risk.
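To make the two-factor idea concrete, here is a hedged sketch that tabulates model risk from model influence (how much the AI output drives the decision) and decision consequence (the impact if that decision is wrong). The levels, scoring and tier names are illustrative assumptions, not a prescribed regulatory scale.

```python
# Illustrative only: a coarse two-factor model risk grid.
# Level names, weights and tier cut-offs are assumptions for illustration.
INFLUENCE = {"low": 1, "medium": 2, "high": 3}
CONSEQUENCE = {"minor": 1, "moderate": 2, "severe": 3}

def model_risk(influence: str, consequence: str) -> str:
    """Map influence and consequence levels to a coarse risk tier."""
    score = INFLUENCE[influence] * CONSEQUENCE[consequence]
    if score <= 2:
        return "low risk"
    if score <= 4:
        return "medium risk"
    return "high risk"

# A model with high influence on a decision with severe consequences
# would attract the most regulatory scrutiny; a low-influence model
# supporting a decision with moderate consequences far less.
print(model_risk("high", "severe"))   # high risk
print(model_risk("low", "moderate"))  # low risk
```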
In 2019, the PHUSE Best Practices for Data Collection Initiatives project team, in conjunction with the Analysis and Display of Safety Analytics project team, conducted a survey (link) to study the variation in how treatment emergent adverse events (TEAEs) are collected and defined in clinical studies. The survey highlighted the need for additional research to further harmonise industry practices. The PHUSE Adverse Event Collection Recommendations and the Treatment Emergent Definitions Recommendations project teams were formed to develop recommendations that reduce this implementation variability.
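As one common, but by no means universal, operational definition, a TEAE is often flagged when an adverse event starts on or after the first dose of study treatment. The sketch below applies that single rule to illustrative data; the variable names are loosely CDISC-style but are assumptions for illustration, not a recommendation from either project team.

```python
import pandas as pd

# Illustrative adverse event records and treatment start dates.
ae = pd.DataFrame({
    "USUBJID": ["01-001", "01-001", "01-002"],
    "AETERM": ["Headache", "Nausea", "Rash"],
    "AESTDTC": ["2023-03-01", "2023-03-20", "2023-02-28"],
})
trt = pd.DataFrame({
    "USUBJID": ["01-001", "01-002"],
    "TRTSDTC": ["2023-03-05", "2023-02-15"],
})

# One common TEAE rule: the AE starts on or after the first dose date.
# Real studies differ, e.g. in handling partial dates or lag windows after
# the last dose, which is exactly the variability the survey highlighted.
adae = ae.merge(trt, on="USUBJID", how="left")
adae["TRTEMFL"] = (
    pd.to_datetime(adae["AESTDTC"]) >= pd.to_datetime(adae["TRTSDTC"])
).map({True: "Y", False: ""})
print(adae)
```

Even this small example shows how quickly definitions can diverge: a different choice for partial start dates or a post-treatment window would change which records are flagged, which is why harmonised recommendations are valuable.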