Radiological Image and Text-Based Medical Concept Detection in Social Networks Using Hybrid Deep Learning.
Saved in:
| Title: | Radiological Image and Text-Based Medical Concept Detection in Social Networks Using Hybrid Deep Learning. |
|---|---|
| Authors: | Bayrakdar S (Computer Engineering Department, Duzce University, Duzce, Turkey; sumeyyebayrakdar@duzce.edu.tr); Yucedag I (Computer Engineering Department, Duzce University, Duzce, Turkey) |
| Source: | Journal of medical systems [J Med Syst] 2025 Dec 05; Vol. 49 (1), pp. 178. Date of Electronic Publication: 2025 Dec 05. |
| Publication Type: | Journal Article |
| Language: | English |
| Journal Info: | Publisher: Kluwer Academic/Plenum Publishers. Country of Publication: United States. NLM ID: 7806056. Publication Model: Electronic. Cited Medium: Internet. ISSN: 1573-689X (Electronic). Linking ISSN: 0148-5598. NLM ISO Abbreviation: J Med Syst. Subsets: MEDLINE |
| Imprint Name(s): | Publication: 1999- : New York, NY : Kluwer Academic/Plenum Publishers Original Publication: New York, Plenum Press. |
| MeSH Terms: | Deep Learning*; Social Networking*; Humans; Neural Networks, Computer; Unified Medical Language System; Social Media |
| Abstract: | Nowadays, the presence of health-related content on social networks is rapidly increasing. Through these networks, a large number of medical images, diagnosed and interpreted by various experts, are shared online. Consequently, concept detection and image classification from medical images remain challenging tasks. In recent years, deep learning-based models have become increasingly popular for addressing these challenges. The primary objective of this study is to perform multi-label classification of radiological images shared on a social network by automatically assigning relevant medical concepts derived from the Unified Medical Language System (UMLS). In this study, a Convolutional Neural Network (CNN) was combined with feed-forward neural networks and various image encoders, including VGG-19, DenseNet-121, ResNet-101, Xception, and EfficientNet-B7, to predict the appropriate concepts. The proposed hybrid deep learning models were trained and evaluated on the ImageCLEF 2019 dataset. Further evaluation was performed on a custom dataset (Rdpd_Test_Ds) composed of radiological images and their associated comments collected from a social network. Model performance was assessed using precision, recall, and F1-score metrics. The evaluation results are promising, demonstrating high performance. To the best of our knowledge, this research is the first to apply deep learning-based models to radiological data collected from a social network, representing a novel and impactful contribution to the field. (© 2025. The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.) |
| References: | N. Jokar, A. R. Honarvar, K. Esfandiari, and S. Aghamirzadeh, "The review of social networks analysis tools," Bull. Soc. R. Sci. Liège, vol. 85, pp. 329–339, 2016. Available: https://popups.uliege.be/0037-9565/index.php?id=5380&file=1. J. L. del Cura Rodríguez, "Social networks in radiology: Toward a new paradigm in medical education?," Radiologia, vol. 66, no. 1, pp. 70–77, Jan. 2024. doi: 10.1016/j.rxeng.2023.01.011. K. K and S. Kamath S, "Deep neural models for automated multi-task diagnostic scan management—quality enhancement, view classification and report generation," Biomed. Phys. Eng. Express, vol. 8, no. 1, p. 015011, Nov. 2021. doi: 10.1088/2057-1976/ac3add. I. H. Sarker, "Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions," SN Comput. Sci., vol. 2, no. 6, pp. 1–20, Aug. 2021. doi: 10.1007/s42979-021-00815-1. M. E. Bayrakdar, "Priority based health data monitoring with IEEE 802.11af technology in wireless medical sensor networks," Med. Biol. Eng. Comput., vol. 57, no. 12, pp. 2757–2769, Dec. 2019. doi: 10.1007/s11517-019-02060-4. T. Talaei Khoei, H. Ould Slimane, and N. Kaabouch, "Deep learning: systematic review, models, challenges, and research directions," Neural Comput. Appl., vol. 35, no. 31, pp. 23103–23124, Nov. 2023. doi: 10.1007/s00521-023-08957-4. F. Altaf, S. M. S. Islam, N. Akhtar, and N. K. Janjua, "Going deep in medical image analysis: Concepts, methods, challenges, and future directions," IEEE Access, vol. 7, pp. 99540–99572, 2019. doi: 10.1109/ACCESS.2019.2929365. D. Agarwal, M. Á. Berbís, A. Luna, V. Lipari, J. B. Ballester, and I. de la Torre-Díez, "Automated Medical Diagnosis of Alzheimer's Disease Using an EfficientNet Convolutional Neural Network," J. Med. Syst., vol. 47, no. 1, pp. 1–22, Dec. 2023. doi: 10.1007/s10916-023-01941-4. K. Karthik and S. Sowmya Kamath, "MSDNet: a deep neural ensemble model for abnormality detection and classification of plain radiographs," J. Ambient Intell. Humaniz. Comput., vol. 14, no. 12, pp. 16099–16113, Dec. 2023. doi: 10.1007/s12652-022-03835-8. G. O. Gajbhiye, A. V. Nandedkar, and I. Faye, "Translating medical image to radiological report: Adaptive multilevel multi-attention approach," Comput. Methods Programs Biomed., vol. 221, p. 106853, Jun. 2022. doi: 10.1016/j.cmpb.2022.106853. H. Wei, Y. Yang, S. Sun, M. Feng, R. Wang, and X. Han, "LMTTM-VMI: Linked Memory Token Turing Machine for 3D volumetric medical image classification," Comput. Methods Programs Biomed., vol. 262, p. 108640, Apr. 2025. doi: 10.1016/j.cmpb.2025.108640. A. Mondal, E. Cambria, A. Feraco, D. Das, and S. Bandyopadhyay, "Auto-categorization of medical concepts and contexts," 2017 IEEE Symp. Ser. Comput. Intell. (SSCI 2017), pp. 1–7, Feb. 2018. doi: 10.1109/SSCI.2017.8285253. O. Bodenreider, "The Unified Medical Language System (UMLS): integrating biomedical terminology," Nucleic Acids Res., vol. 32, Database issue, p. D267, Jan. 2004. doi: 10.1093/nar/gkh061. B. Ionescu et al., "ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature," Lect. Notes Comput. Sci., vol. 11696, pp. 358–386, 2019. doi: 10.1007/978-3-030-28577-7_28. O. Pelka, C. M. Friedrich, A. G. Seco De Herrera, and H. Müller, "Overview of the ImageCLEFmed 2019 concept detection task," CEUR Workshop Proc., vol. 2380, 2019. W. Xu, Y. L. Fu, and D. Zhu, "ResNet and its application to medical image processing: Research progress and challenges," Comput. Methods Programs Biomed., vol. 240, p. 107660, Oct. 2023. doi: 10.1016/j.cmpb.2023.107660. S. Takahashi et al., "Comparison of Vision Transformers and Convolutional Neural Networks in Medical Image Analysis: A Systematic Review," J. Med. Syst., vol. 48, no. 1, pp. 1–22, Dec. 2024. doi: 10.1007/s10916-024-02105-8. P. Clough, M. Sanderson, and N. Reid, "The Eurovision St Andrews collection of photographs," ACM, UK, 2003. doi: 10.1145/1147197.1147199. "Keras: the Python deep learning API." https://keras.io/ (accessed Jan. 17, 2023). H. Qassim, A. Verma, and D. Feinzimer, "Compressed residual-VGG16 CNN model for big data places image recognition," 2018 IEEE 8th Annu. Comput. Commun. Workshop Conf. (CCWC), pp. 169–175, Feb. 2018. doi: 10.1109/CCWC.2018.8301729. O. Pelka, S. Koitka, J. Rückert, F. Nensa, and C. M. Friedrich, "Radiology Objects in COntext (ROCO): A multimodal image dataset," Lect. Notes Comput. Sci., vol. 11043, pp. 180–189, 2018. doi: 10.1007/978-3-030-01364-6_20. V. Kougia, J. Pavlopoulos, and I. Androutsopoulos, "AUEB NLP Group at ImageCLEFmed Caption 2019," CLEF 2019 Working Notes, CEUR Workshop Proceedings, Lugano, Switzerland, 2019. National Library of Medicine, "UMLS Metathesaurus Browser." https://uts.nlm.nih.gov/uts/umls/home (accessed Dec. 15, 2022). J. Xu et al., "Concept detection based on multi-label classification and image captioning approach—DAMO at ImageCLEF 2019," CLEF 2019 Working Notes, CEUR Workshop Proceedings, 2019. B. Jing, P. Xie, and E. P. Xing, "On the Automatic Generation of Medical Imaging Reports," Proc. 56th Annu. Meet. Assoc. Comput. Linguist., vol. 1, pp. 2577–2586, 2018. doi: 10.18653/v1/P18-1240. "TensorFlow v2.11.0, Keras, VGG19." https://www.tensorflow.org/api_docs/python/tf/keras/applications/vgg19/VGG19 (accessed Dec. 20, 2022). D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," Int. Conf. Learning Representations (ICLR), San Diego, CA, 2015. doi: 10.48550/arxiv.1412.6980. M. Abdallah, N. An Le Khac, H. Jahromi, and A. Delia Jurcut, "A Hybrid CNN-LSTM Based Approach for Anomaly Detection Systems in SDNs," 16th Int. Conf. Availability, Reliability and Security, Vienna, Austria, Aug. 2021. doi: 10.1145/3465481.3469190. H. Almutairi, G. M. Hassan, and A. Datta, "Detection of obstructive sleep apnoea by ECG signals using deep learning architectures," 28th Eur. Signal Processing Conf. (EUSIPCO), Amsterdam, Netherlands, Jan. 2021, pp. 1382–1386. doi: 10.23919/EUSIPCO47968.2020.9287360. S. L. Oh, E. Y. K. Ng, R. S. Tan, and U. R. Acharya, "Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats," Comput. Biol. Med., vol. 102, pp. 278–287, Nov. 2018. doi: 10.1016/j.compbiomed.2018.06.002. M. Li, T. Zhang, Y. Chen, and A. J. Smola, "Efficient mini-batch training for stochastic optimization," Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., New York, USA, pp. 661–670, 2014. doi: 10.1145/2623330.2623612. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely Connected Convolutional Networks," IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 2261–2269. Available: https://github.com/liuzhuang13/DenseNet. N. Hasan, Y. Bao, A. Shawon, and Y. Huang, "DenseNet Convolutional Neural Networks Application for Predicting COVID-19 Using CT Image," SN Comput. Sci., vol. 2, no. 5, pp. 1–11, Sep. 2021. doi: 10.1007/s42979-021-00782-7. A. Aljuaid and M. Anwar, "Survey of Supervised Learning for Medical Image Processing," SN Comput. Sci., vol. 3, no. 4, pp. 1–22, Jul. 2022. doi: 10.1007/s42979-022-01166-1. F. Chollet, "Xception: Deep Learning with Depthwise Separable Convolutions," Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognition (CVPR 2017), pp. 1800–1807, 2017. doi: 10.48550/arxiv.1610.02357. S. Roopashree and J. Anitha, "DeepHerb: A Vision Based System for Medicinal Plants Using Xception Features," IEEE Access, vol. 9, pp. 135927–135941, 2021. doi: 10.1109/ACCESS.2021.3116207. M. Tan and Q. V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," 36th Int. Conf. Mach. Learn., pp. 10691–10700, May 2019. doi: 10.48550/arxiv.1905.11946. Ü. Atila, M. Uçar, K. Akyol, and E. Uçar, "Plant leaf disease classification using EfficientNet deep learning model," Ecol. Inform., vol. 61, no. 101182, Mar. 2021. doi: 10.1016/j.ecoinf.2020.101182. T. B. T. Nguyen, M. V. Ngo, and V. P. Nguyen, "Histopathological Imaging Classification of Breast Tissue for Cancer Diagnosis Support Using Deep Learning Models," Lect. Notes Inst. Comput. Sci. Soc. Telecommun. Eng. (LNICST), vol. 444, pp. 152–164, 2022. doi: 10.1007/978-3-031-08878-0_11. S. Godbole and S. Sarawagi, "Discriminative methods for multi-labeled classification," Lect. Notes Comput. Sci., vol. 3056, pp. 22–30, 2004. doi: 10.1007/978-3-540-24775-3_5. D. Miranda, V. Thenkanidiyoor, and D. A. Dinesh, "Review on approaches to concept detection in medical images," Biocybern. Biomed. Eng., vol. 42, no. 2, pp. 453–462, Apr. 2022. doi: 10.1016/j.bbe.2022.02.012. "Radiopaedia.org | Facebook." https://www.facebook.com/Radiopaedia.org (accessed Jan. 01, 2023). "Radiography - Radiopaedia.org | Facebook." https://www.facebook.com/RadRadiopaedia (accessed Jan. 01, 2023). "Radiopaedia.org, the peer-reviewed collaborative radiology resource." https://radiopaedia.org/ (accessed Aug. 05, 2025). |
| Contributed Indexing: | Keywords: Deep learning; Medical concept detection; Radiological image and text; Social networks |
| Entry Date(s): | Date Created: 20251205 Date Completed: 20251205 Latest Revision: 20251205 |
| Update Code: | 20251205 |
| DOI: | 10.1007/s10916-025-02311-y |
| PMID: | 41348245 |
| Database: | MEDLINE |
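The abstract describes multi-label concept assignment evaluated with precision, recall, and F1-score. As a minimal illustrative sketch (not the authors' code), the snippet below binarizes sets of UMLS concept IDs against a fixed vocabulary and computes micro-averaged metrics over all (image, concept) pairs; the concept identifiers and example predictions are hypothetical.

```python
# Sketch of multi-label concept evaluation, assuming each image is
# scored against its gold UMLS concept set. CUIs below are examples only.

def to_multi_hot(concepts, vocab):
    """Binarize a set of concept IDs against a fixed vocabulary."""
    return [1 if c in concepts else 0 for c in vocab]

def micro_prf1(gold_rows, pred_rows):
    """Micro-averaged precision/recall/F1 over all (image, concept) pairs."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_rows, pred_rows):
        for g, p in zip(gold, pred):
            tp += g and p            # concept present and predicted
            fp += (not g) and p      # predicted but absent
            fn += g and (not p)      # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

vocab = ["C0040398", "C1306645", "C0817096"]  # hypothetical UMLS CUIs
gold = [to_multi_hot({"C0040398", "C1306645"}, vocab),
        to_multi_hot({"C0817096"}, vocab)]
pred = [to_multi_hot({"C0040398"}, vocab),
        to_multi_hot({"C0817096", "C1306645"}, vocab)]
p, r, f1 = micro_prf1(gold, pred)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.667 0.667 0.667
```

Micro-averaging pools true/false positives across the whole test set, so frequent concepts dominate the score; a per-concept (macro) average would weight rare UMLS concepts equally.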