Studying and mitigating the effects of data drifts on ML model performance at the example of chemical toxicity data

Saved in:
Detailed bibliography
Title: Studying and mitigating the effects of data drifts on ML model performance at the example of chemical toxicity data
Authors: Morger, Andrea, de Lomana, Marina Garcia, Norinder, Ulf, Svensson, Fredrik, Kirchmair, Johannes, Mathea, Miriam, Volkamer, Andrea
Source: Scientific Reports (Sci Rep), Vol 12, Iss 1, pp 1-13 (2022)
Publisher information: Research Square Platform LLC, 2021.
Year of publication: 2021
Subjects: 0301 basic medicine, Science, Molecular Conformation, 301207 Pharmazeutische Chemie, Article, Machine Learning, 03 medical and health sciences, 106005 Bioinformatik, 301211 Toxicology, Bioinformatics (Computational Biology), 0303 health sciences, RECEPTOR, Drug discovery, PDBBIND DATABASE, CONFORMAL PREDICTION, data drifts, Computational biology and bioinformatics, Machine learning (ML) models, 301211 Toxikologie, Calibration, Bioinformatik (beräkningsbiologi), Medicine, Biological Assay, chemical toxicity data, 106005 Bioinformatics, 301207 Pharmaceutical chemistry
Description: Machine learning models are widely applied to predict molecular properties or the biological activity of small molecules on a specific protein. Models can be integrated in a conformal prediction (CP) framework which adds a calibration step to estimate the confidence of the predictions. CP models present the advantage of ensuring a predefined error rate under the assumption that test and calibration set are exchangeable. In cases where the test data have drifted away from the descriptor space of the training data, or where assay setups have changed, this assumption might not be fulfilled and the models are not guaranteed to be valid. In this study, the performance of internally valid CP models when applied to either newer time-split data or to external data was evaluated. In detail, temporal data drifts were analysed based on twelve datasets from the ChEMBL database. In addition, discrepancies between models trained on publicly available data and applied to proprietary data for the liver toxicity and MNT in vivo endpoints were investigated. In most cases, a drastic decrease in the validity of the models was observed when applied to the time-split or external (holdout) test sets. To overcome the decrease in model validity, a strategy for updating the calibration set with data more similar to the holdout set was investigated. Updating the calibration set generally improved the validity, restoring it completely to its expected value in many cases. The restored validity is the first requisite for applying the CP models with confidence. However, the increased validity comes at the cost of a decrease in model efficiency, as more predictions are identified as inconclusive. This study presents a strategy to recalibrate CP models to mitigate the effects of data drifts. Updating the calibration sets without having to retrain the model has proven to be a useful approach to restore the validity of most models.
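The recalibration idea the abstract describes can be illustrated with a minimal, hypothetical sketch of inductive conformal prediction (ICP) for binary classification. This is not the authors' code: the toy `predict_proba` model, the nonconformity score (one minus the predicted probability of the true class), and all variable names are illustrative assumptions. The key point it demonstrates is that swapping in a fresh calibration set changes the prediction sets without retraining the underlying model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy probabilistic "model" standing in for a trained classifier:
# class-1 probability is a logistic function of a single feature.
def predict_proba(x):
    p1 = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))
    return np.stack([1.0 - p1, p1], axis=-1)

def nonconformity(X, y):
    """Nonconformity score: 1 minus the predicted probability of the true class."""
    return 1.0 - predict_proba(X)[np.arange(len(y)), y]

def predict_set(x, cal_scores, significance=0.2):
    """ICP prediction set: labels whose p-value exceeds the significance level."""
    proba = predict_proba(np.atleast_1d(x))[0]
    labels = []
    for label in (0, 1):
        score = 1.0 - proba[label]
        # p-value: fraction of calibration scores at least as nonconforming.
        p_val = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
        if p_val > significance:
            labels.append(label)
    return labels

# Calibrate on data drawn from the original (training-like) distribution ...
X_cal = rng.normal(0.0, 1.0, 200)
y_cal = (X_cal + rng.normal(0.0, 0.5, 200) > 0).astype(int)
cal_scores = nonconformity(X_cal, y_cal)

# ... and, after a data drift, recalibrate by recomputing the scores on data
# from the new distribution -- the model itself is left untouched.
X_new = rng.normal(2.0, 1.0, 200)
y_new = (X_new + rng.normal(0.0, 0.5, 200) > 0).astype(int)
cal_scores_updated = nonconformity(X_new, y_new)

print(predict_set(0.1, cal_scores))
print(predict_set(0.1, cal_scores_updated))
```

Prediction sets with zero or two labels are possible; the trade-off the abstract mentions (restored validity at the cost of efficiency) shows up here as more two-label, i.e. inconclusive, sets after recalibration.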
Document type: Article
Conference object
Other literature type
File description: application/pdf
ISSN: 2045-2322
DOI: 10.21203/rs.3.rs-945085/v2
DOI: 10.21203/rs.3.rs-945085/v1
DOI: 10.1038/s41598-022-09309-3
DOI: 10.17169/refubium-42357
Access URL: https://www.researchsquare.com/article/rs-945085/latest.pdf
https://www.researchsquare.com/article/rs-945085/v1.pdf?c=1634247131000
https://pubmed.ncbi.nlm.nih.gov/35508546
https://doaj.org/article/9640412f156b4083ba647cd502b86051
https://ucrisportal.univie.ac.at/de/publications/8915bd71-c2b5-4c5f-87a9-f3e77d163171
https://doi.org/10.1038/s41598-022-09309-3
https://www.researchsquare.com/article/rs-945085/v1
https://phaidra.univie.ac.at/o:1666670
https://hdl.handle.net/11353/10.1666670
https://refubium.fu-berlin.de/handle/fub188/42633
https://doi.org/10.17169/refubium-42357
https://discovery-pp.ucl.ac.uk/id/eprint/10149273/
http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-475196
Rights: CC BY
URL: http://creativecommons.org/licenses/by/4.0/
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Accession number: edsair.doi.dedup.....524a9ca6b36c2bccecf81bdaf2319217
Database: OpenAIRE