An Attentive Dual-Encoder Framework Leveraging Multimodal Visual and Semantic Information for Automatic OSAHS Diagnosis



Detailed bibliography
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1-5
Main authors: Wei, Yingchen; Qiu, Xihe; Tan, Xiaoyu; Huang, Jingjing; Chu, Wei; Xu, Yinghui; Qi, Yuan
Format: Conference paper
Language: English
Published: IEEE, 06.04.2025
ISSN:2379-190X
Description
Summary: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a common sleep disorder caused by upper-airway blockage, leading to oxygen deprivation and disrupted sleep. Traditional diagnosis using polysomnography (PSG) is expensive, time-consuming, and uncomfortable. Existing deep-learning methods based on facial image analysis lack accuracy because they capture facial features poorly and rely on limited sample sizes. To address this, we propose a multimodal dual-encoder model that integrates visual and language inputs for automated OSAHS diagnosis. The model balances the data with a RandomOverSampler (ROS), extracts key facial features with attention grids, and converts basic physiological data into meaningful text. Cross-attention combines image and text data for better feature extraction, and an ordered regression loss ensures stable learning. Our approach improves diagnostic efficiency and accuracy, achieving 91.3% top-1 accuracy on a four-class severity-classification task, demonstrating state-of-the-art performance. Code is available at https://github.com/luboyan6/VTA-OSAHS.
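The abstract names an "ordered regression loss" for the four-class severity task but does not specify its form. A minimal sketch of one standard ordinal formulation (severity grade k encoded as K-1 binary "is severity greater than threshold j?" targets, scored with binary cross-entropy) is shown below; the function names and the choice of threshold decomposition are illustrative assumptions, not the paper's exact loss.

```python
import math

NUM_CLASSES = 4  # four OSAHS severity grades, as in the abstract

def ordinal_targets(label, num_classes=NUM_CLASSES):
    # Grade k becomes k ones followed by zeros: one binary target
    # per threshold j, answering "is severity > j?".
    return [1.0 if label > j else 0.0 for j in range(num_classes - 1)]

def ordinal_bce_loss(logits, label, num_classes=NUM_CLASSES):
    # logits: one raw score per threshold; sigmoid gives P(severity > j).
    # Unlike plain softmax cross-entropy, errors between distant grades
    # violate more thresholds and are penalized more heavily.
    targets = ordinal_targets(label, num_classes)
    loss = 0.0
    for z, t in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))
        loss -= t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    return loss / (num_classes - 1)
```

For example, a model that is confident severity exceeds threshold 0 but not thresholds 1 and 2 (logits roughly [10, -10, -10]) incurs a near-zero loss for a grade-1 label, while the reversed prediction incurs a large one.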
DOI:10.1109/ICASSP49660.2025.10888243