Balancing privacy and performance in healthcare: A federated learning framework for sensitive data
| Title: | Balancing privacy and performance in healthcare: A federated learning framework for sensitive data |
|---|---|
| Authors: | Fatima Tanveer, Faisal Iradat, Waseem Iqbal, Hatoon S Alsagri, Haya Abdullah A Alhakbani, Awais Ahmad, Fakhri Alam Khan |
| Source: | DIGITAL HEALTH, Vol. 11 |
| Publisher Information: | SAGE Publications, 2025. |
| Publication Year: | 2025 |
| Description: | **Objective:** To design and evaluate a privacy-preserving federated learning (PPFL) framework for sensitive healthcare data that balances robust privacy, model performance, and computational efficiency while promoting user trust. **Methods:** We integrated differentially private stochastic gradient descent (DPSGD) into a federated learning (FL) pipeline and evaluated the system on the Stroke Prediction Dataset. Experiments measured model utility (accuracy, F1 score), privacy (the budget ε), resource usage, and trust features, with results compared against recent baselines. **Results:** The proposed framework achieved 93% accuracy on stroke risk prediction while maintaining a final privacy budget of ε = 0.69 and minimal computational overhead. Our approach outperformed existing methods on the privacy-utility trade-off, provided real-time privacy feedback, and complies with TRIPOD-AI/CLAIM reporting recommendations. **Conclusion:** This PPFL framework enables effective, trustworthy privacy-preserving machine learning in healthcare and other resource-constrained settings. Future work will extend the model architectures, strengthen regulatory alignment, and assess user trust directly. |
| Document Type: | Article |
| Language: | English |
| ISSN: | 2055-2076 |
| DOI: | 10.1177/20552076251381769 |
| Rights: | CC BY-NC |
| Accession Number: | edsair.doi...........4b3639e9ea63ba8d34393ed7e7f33b87 |
| Database: | OpenAIRE |
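The Methods field above describes integrating DPSGD into an FL pipeline, but this record does not reproduce any code. The sketch below is a minimal, illustrative rendering of that combination under stated assumptions, not the authors' implementation: it assumes a NumPy linear model, and the helper names (`dp_sgd_step`, `fed_avg`) as well as the clipping norm, noise multiplier, and learning rate are hypothetical placeholders.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD update (illustrative): clip each per-example gradient to
    L2 norm `clip_norm`, average, add Gaussian noise scaled by `noise_mult`,
    and take a gradient step. Hyperparameters are placeholders."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.mean(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm / len(clipped), size=w.shape))
    return w - lr * noisy_mean

def fed_avg(client_weights):
    """Server-side FedAvg: average the clients' locally updated weights."""
    return np.mean(client_weights, axis=0)

# Toy round: three clients each run one DP-SGD step on a linear regression
# model with synthetic local data, then the server averages the weights.
rng = np.random.default_rng(42)
global_w = np.zeros(4)
updated = []
for _ in range(3):
    X, y = rng.normal(size=(8, 4)), rng.normal(size=8)          # local batch
    grads = [2 * (x @ global_w - t) * x for x, t in zip(X, y)]  # per-example MSE gradients
    updated.append(dp_sgd_step(global_w, grads, rng=rng))
global_w = fed_avg(updated)
print(global_w)
```

In a real deployment, the noise multiplier and number of rounds would be chosen so that a privacy accountant reports the target budget (the abstract cites a final ε = 0.69); this sketch omits privacy accounting entirely.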