Exploring the impact of design criteria for reference sets on performance evaluation of signal detection algorithms: The case of drug–drug interactions

Published in: Pharmacoepidemiology and Drug Safety, Volume 32, Issue 8, pp. 832–844
Main authors: Kontsioti, Elpida; Maskell, Simon; Pirmohamed, Munir
Medium: Journal Article
Language: English
Publication details: Chichester, UK: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.08.2023
ISSN: 1053-8569; 1099-1557
Description
Summary: Purpose: To evaluate the impact of multiple design criteria for reference sets that are used to quantitatively assess the performance of pharmacovigilance signal detection algorithms (SDAs) for drug–drug interactions (DDIs). Methods: Starting from a large and diversified reference set for two-way DDIs, we generated custom-made reference sets of various sizes considering multiple design criteria (e.g., adverse event background prevalence). We assessed differences observed in the performance metrics of three SDAs when applied to FDA Adverse Event Reporting System (FAERS) data. Results: For some design criteria (e.g., theoretical evidence associated with positive controls), the impact on the performance metrics was negligible across the different SDAs, while others (e.g., restriction to designated medical events, event background prevalence) appeared to have opposing effects of differing magnitude on the Area Under the Curve (AUC) and positive predictive value (PPV) estimates. Conclusions: The relative composition of reference sets can substantially affect the evaluation metrics, potentially altering the conclusions regarding which methodologies are perceived to perform best. The selection of controls therefore needs careful consideration, both to avoid misinterpreting signals triggered by confounding factors rather than true associations and to avoid biasing the evaluation by favoring some algorithms while penalizing others.
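The evaluation procedure summarized above amounts to scoring drug–drug–event combinations with an SDA and comparing those scores against the positive and negative controls in a reference set, reporting AUC and PPV. The following is a minimal, illustrative Python sketch of that comparison; the labels, scores, and signalling threshold are invented for demonstration and are not taken from the study.

```python
# Minimal sketch (not the authors' code) of evaluating SDA scores against a
# reference set of positive (1) and negative (0) DDI controls.
from sklearn.metrics import roc_auc_score

# Hypothetical reference set labels and SDA scores (e.g., a disproportionality statistic).
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [3.2, 1.8, 0.9, 0.4, 1.1, 0.2, 2.5, 0.7]

auc = roc_auc_score(labels, scores)  # Area Under the ROC Curve

threshold = 1.0  # illustrative signalling threshold, an assumption for this example
flagged = [(y, s) for y, s in zip(labels, scores) if s >= threshold]
ppv = sum(y for y, _ in flagged) / len(flagged)  # positive predictive value among flagged pairs

print(f"AUC = {auc:.2f}, PPV = {ppv:.2f}")
```

As the abstract notes, both metrics depend on how the reference set is composed: changing which positive and negative controls are included can shift AUC and PPV in different directions for different algorithms.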
DOI: 10.1002/pds.5609