ROI-Based Multimodal Neuroimaging Feature Fusion Method and Its Graph Neural Network Diagnostic Model


Full Description

Saved in:
Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 26915-26926
Main authors: Wang, Xuan; Yang, Xiaopeng; Zhang, Xiaotong; Chen, Yang
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 2025
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 2169-3536
Online access: Full text
Description
Abstract: Single-modality neuroimaging data often provide limited information and are constrained by technical issues such as signal-to-noise ratio and resolution limitations, potentially leading to biases and an incomplete understanding of brain complexities. This can hinder the development of diagnostic and therapeutic strategies for brain disorders. To address these challenges, this paper presents the Multimodal Graph Neural Network Model based on Feature Fusion (MMP-DGNN), which leverages sMRI and PET data. The model uses an autoencoder to extract and accurately describe sample features. During feature fusion, a shared adjacency matrix based on feature similarity and phenotypic data is constructed for graph representation. A dual-layer graph neural network then classifies the features, and the per-modality results are fused at the decision layer for final classification. Experimental results show that MMP-DGNN achieves a classification accuracy of 98.17%, outperforming other methods in multimodal neuroimaging data classification.
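Two steps of the pipeline described above can be made concrete: building a shared adjacency matrix that combines feature similarity with phenotypic agreement, and fusing per-modality predictions at the decision layer. The sketch below is illustrative only, assuming cosine similarity for features, a binary phenotype-match term, and weighted averaging for decision fusion; all function names, thresholds, and weights are assumptions, not the paper's exact formulation.

```python
import numpy as np

def shared_adjacency(features, phenotypes, threshold=0.5):
    """Illustrative shared adjacency matrix: feature similarity gated by
    phenotypic agreement, with weak edges pruned by a threshold.

    features:   (n_samples, n_feats) array of per-sample feature vectors
    phenotypes: (n_samples,) array of phenotypic labels (e.g., sex or site)
    """
    # Cosine similarity between per-sample feature vectors
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    sim = normed @ normed.T
    # Phenotypic agreement: 1 where two samples share the same phenotype
    agree = (phenotypes[:, None] == phenotypes[None, :]).astype(float)
    adj = sim * agree
    np.fill_diagonal(adj, 0.0)                     # no self-loops
    return np.where(adj >= threshold, adj, 0.0)    # keep strong edges only

def decision_fusion(p_smri, p_pet, w=0.5):
    """Decision-layer fusion as a weighted average of the class-probability
    outputs from the two modality branches (weight w is an assumption)."""
    return w * p_smri + (1.0 - w) * p_pet

# Toy usage with random features and a binary phenotype
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))
ph = np.array([0, 0, 1, 1, 0, 1])
A = shared_adjacency(X, ph)
print(A.shape)  # (6, 6)

p_smri = np.array([[0.8, 0.2], [0.3, 0.7]])
p_pet = np.array([[0.6, 0.4], [0.4, 0.6]])
print(decision_fusion(p_smri, p_pet))
```

The resulting matrix `A` is symmetric with a zero diagonal and has edges only between samples whose phenotypes match, which is one simple way to realize the "feature similarity plus phenotypic data" construction the abstract describes before the graph is passed to the GNN.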
ISSN: 2169-3536
DOI:10.1109/ACCESS.2024.3435433