Multimodal learning using convolution neural network and Sparse Autoencoder


Bibliographic Details
Published in: International Conference on Big Data and Smart Computing, pp. 309-312
Main Authors: Vu, Tien Duong; Yang, Hyung-Jeong; Nguyen, Van Quan; Oh, A-Ran; Kim, Mi-Sun
Format: Conference Proceeding
Language: English
Published: IEEE, 01.02.2017
ISSN: 2375-9356
Online Access: Full text
Description
Summary: In the last decade, pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer's disease (AD) have been the subject of extensive research. Deep learning has recently attracted great interest for AD classification. Most previous work was done on single-modality datasets, such as Magnetic Resonance Imaging (MRI) or Positron Emission Tomography (PET), and showed high performance. However, distinguishing Alzheimer's brain data from healthy brain data in older adults (age > 75) is challenging because the brain patterns and image intensities are highly similar. Combining multiple modalities can address this issue, since the modalities reveal complementary hidden biomarkers that any single modality cannot provide by itself. We therefore propose a deep learning method on fused multimodal data. In detail, our approach trains and tests a Sparse Autoencoder (SAE) and a convolutional neural network (CNN) on combined PET-MRI data to diagnose a patient's disease status. We focus on the advantage of multiple modalities in providing complementary information over a single modality, which leads to improved classification accuracy. In experiments on a dataset of 1272 scans from the ADNI study, the proposed method achieves a classification accuracy of 90% between AD patients and healthy controls, demonstrating an improvement over using only one modality.
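The abstract describes early fusion of co-registered PET and MRI data fed through a CNN and a Sparse Autoencoder. The paper does not give implementation details, so the following is only a minimal NumPy sketch of the general idea: the two modalities are stacked as input channels, passed through a toy convolution layer, and the flattened features drive a sparse-autoencoder-style hidden layer with a KL sparsity penalty. All shapes, sizes, and weights here are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inputs: one 8x8 MRI slice and one co-registered 8x8 PET slice.
mri = rng.random((8, 8))
pet = rng.random((8, 8))

# Early fusion: stack the two modalities as channels of a single input volume.
fused = np.stack([mri, pet])                         # shape (2, 8, 8)

# Minimal 2-D convolution over the fused volume (valid padding, no stride).
def conv2d(volume, kernels):
    c, h, w = volume.shape
    n, _, kh, kw = kernels.shape
    out = np.zeros((n, h - kh + 1, w - kw + 1))
    for k in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(volume[:, i:i+kh, j:j+kw] * kernels[k])
    return out

kernels = rng.standard_normal((4, 2, 3, 3)) * 0.1    # 4 filters spanning both channels
features = np.maximum(conv2d(fused, kernels), 0)     # ReLU feature maps, shape (4, 6, 6)

# Sparse-autoencoder-style hidden layer: sigmoid units plus a KL sparsity
# penalty that pushes the mean activation toward a small target rho.
x = features.reshape(-1)                             # flatten feature maps
W = rng.standard_normal((16, x.size)) * 0.01
hidden = 1.0 / (1.0 + np.exp(-(W @ x)))              # sigmoid activations, shape (16,)

rho, rho_hat = 0.05, hidden.mean()                   # target vs. actual mean activation
kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

print(features.shape, hidden.shape)
```

In a full training pipeline, the KL term would be added to the reconstruction loss and all weights learned by backpropagation; this sketch only shows how channel-level fusion lets one set of filters see both modalities at once.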
DOI:10.1109/BIGCOMP.2017.7881683