Integrated Artificial Intelligence Framework for Tuberculosis Treatment Abandonment Prediction: A Multi-Paradigm Approach.

Bibliographic Details
Title: Integrated Artificial Intelligence Framework for Tuberculosis Treatment Abandonment Prediction: A Multi-Paradigm Approach.
Authors: Filho, Frederico Guilherme Santana Da Silva; Falcão, Igor Wenner Silva; de Souza, Tobias Moraes; Carneiro, Saul Rassy; da Rocha Seruffo, Marcos César; Cardoso, Diego Lisboa
Source: Journal of Clinical Medicine; Dec 2025, Vol. 14, Issue 24, p8646, 36p
Subject Terms: TUBERCULOSIS, ARTIFICIAL intelligence, PREDICTION models, MACHINE learning, REINFORCEMENT learning, NATURAL language processing, PATIENT dropouts
Geographic Terms: SAO Paulo (Brazil : State)
Abstract: Background/Objectives: Treatment adherence challenges affect 10–20% of tuberculosis patients globally, contributing to drug resistance and continued transmission. While artificial intelligence approaches show promise for identifying patients who may benefit from additional treatment support, most models lack the interpretability necessary for clinical implementation. We aimed to develop and validate an integrated artificial intelligence framework combining traditional machine learning (interpretable algorithms like logistic regression and decision trees), explainable AI (methods showing which patient characteristics influence predictions), deep reinforcement learning (algorithms learning optimal intervention strategies), and natural language processing (clinical text analysis) to identify tuberculosis patients who would benefit from enhanced treatment support services. Methods: We analyzed 103,846 pulmonary tuberculosis cases from São Paulo state surveillance data (2006–2016). We evaluated models using precision (accuracy of positive predictions), recall (ability to identify all patients requiring support), F1-score (balanced performance measure), and AUC-ROC (overall discrimination ability) while maintaining interpretability scores above 0.90 for clinical transparency. Results: Our integrated framework demonstrated that explainable AI matched traditional machine learning performance (both F1-score: 0.77) while maintaining maximum interpretability (score: 0.95). The combined ensemble delivered superior results (F1-score: 0.82, 95% CI: 0.79–0.85), representing a 6.5% improvement over individual approaches (p < 0.001). Key predictors included substance use disorders, HIV co-infection, and treatment supervision factors rather than demographic characteristics. Conclusions: This multi-paradigm AI system provides a methodologically sound foundation for identifying tuberculosis patients who would benefit from enhanced treatment support services. The approach delivers excellent predictive accuracy while preserving full clinical transparency, demonstrating that the accuracy–interpretability trade-off in medical AI can be resolved through the systematic integration of complementary methodologies. [ABSTRACT FROM AUTHOR]
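Note: To give a concrete sense of the evaluation metrics named in the Methods (precision, recall, F1-score, AUC-ROC) and of how predictions from complementary models can be combined into an ensemble, the following minimal Python sketch is provided. It is not the authors' pipeline: the synthetic dataset, the scikit-learn soft-voting scheme, and the specific base learners and hyperparameters are illustrative assumptions standing in for the published multi-paradigm framework trained on São Paulo surveillance data.

```python
# Illustrative sketch only: evaluates interpretable classifiers with the
# metrics cited in the abstract (precision, recall, F1, AUC-ROC) and combines
# them with a simple soft-voting ensemble. Synthetic data stands in for the
# São Paulo surveillance records, which are not distributed with this record.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for tabular patient features and an abandonment label
# (class imbalance roughly mimics the 10-20% non-adherence rate cited above).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.85, 0.15], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Interpretable base learners, as in the "traditional machine learning" arm.
log_reg = LogisticRegression(max_iter=1000)
tree = DecisionTreeClassifier(max_depth=5, random_state=42)

# Soft voting averages the predicted class probabilities of the base learners.
ensemble = VotingClassifier(
    estimators=[("lr", log_reg), ("dt", tree)], voting="soft")

for name, model in [("logistic regression", log_reg),
                    ("decision tree", tree),
                    ("soft-voting ensemble", ensemble)]:
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]
    print(f"{name}: precision={precision_score(y_test, y_pred):.2f} "
          f"recall={recall_score(y_test, y_pred):.2f} "
          f"F1={f1_score(y_test, y_pred):.2f} "
          f"AUC-ROC={roc_auc_score(y_test, y_prob):.2f}")
```

Soft voting averages class probabilities, which loosely mirrors how an ensemble can outperform its individual members while each member remains interpretable; the published framework additionally integrates explainable AI, deep reinforcement learning, and natural language processing components.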
Database: Biomedical Index
ISSN: 2077-0383
DOI: 10.3390/jcm14248646