Balancing privacy and performance in healthcare: A federated learning framework for sensitive data
Saved in:
| Title: | Balancing privacy and performance in healthcare: A federated learning framework for sensitive data |
|---|---|
| Authors: | Fatima Tanveer, Faisal Iradat, Waseem Iqbal, Hatoon S Alsagri, Haya Abdullah A Alhakbani, Awais Ahmad, Fakhri Alam Khan |
| Source: | DIGITAL HEALTH. 11 |
| Publisher information: | SAGE Publications, 2025. |
| Year of publication: | 2025 |
| Description: | Objective: To design and evaluate a privacy-preserving federated learning (PPFL) framework for sensitive healthcare data, balancing robust privacy, model performance, and computational efficiency while promoting user trust. Methods: We integrated differentially private stochastic gradient descent (DP-SGD) into a federated learning (FL) pipeline and evaluated the system on the Stroke Prediction Dataset. Experiments measured model utility (accuracy, F1), privacy (ε), resource usage, and trust features, with results compared to recent baselines. Results: The proposed framework achieved 93% accuracy on stroke risk prediction while maintaining a final privacy budget of ε = 0.69 and minimal computational overhead. Our approach outperformed existing methods on the privacy-utility trade-off, provided real-time privacy feedback, and complies with TRIPOD-AI/CLAIM reporting recommendations. Conclusion: This PPFL framework enables effective, trustworthy privacy-preserving ML in healthcare and resource-constrained settings. Future work will extend model architectures, regulatory alignment, and direct user trust assessment. |
| Document type: | Article |
| Language: | English |
| ISSN: | 2055-2076 |
| DOI: | 10.1177/20552076251381769 |
| Rights: | CC BY-NC |
| Accession number: | edsair.doi...........4b3639e9ea63ba8d34393ed7e7f33b87 |
| Database: | OpenAIRE |
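The Methods above name DP-SGD as the privacy mechanism inside the FL pipeline. As a rough illustration of the core DP-SGD update (a minimal sketch of the general technique, not the authors' implementation; the function name and all parameter values here are hypothetical):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One differentially private SGD step: clip each per-example
    gradient to L2 norm clip_norm, average the clipped gradients,
    add Gaussian noise scaled by noise_multiplier * clip_norm,
    then apply the noisy update to the parameters."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the Gaussian mechanism,
    # divided by the batch size because we noise the average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

In an FL setting, each client would run steps like this locally and send only the updated parameters to the server for aggregation; the cumulative privacy budget ε (such as the ε = 0.69 reported above) is then tracked across rounds with a privacy accountant.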