Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, Vol. 14, No. 12, pp. 2462-2466
Authors: Seyfioglu, Mehmet Saygin; Gurbuz, Sevgi Zubeyde
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2017
ISSN: 1545-598X, 1558-0571
Online access: Full text
Abstract: Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting while maintaining good generalization. In radar applications, however, acquiring a measured data set on the order of thousands of samples is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques (unsupervised pretraining and transfer learning) for training DNNs on small data sets is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogleNet) is used to augment measured RF data for training. A 12-class problem for the discrimination of micro-Doppler signatures of indoor human activities is used to analyze activation maps, bottleneck features, class models, and classification accuracy with respect to training sample size. Results show that on meager data sets, transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively. Visualization of activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.
DOI: 10.1109/LGRS.2017.2771405
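
The abstract describes two DNN initialization strategies: unsupervised pretraining with a convolutional autoencoder (CAE) and transfer learning from ImageNet-trained VGGNet/GoogleNet. The following PyTorch sketch, which is not the authors' code, illustrates only the CAE idea under assumed settings (128x128 single-channel spectrograms, an arbitrary three-layer encoder, a 12-class linear head): the autoencoder is first trained to reconstruct unlabeled micro-Doppler spectrograms, and its encoder weights then initialize the classifier that is fine-tuned on the small labeled set. Transfer learning would instead start from a pretrained VGGNet/GoogleNet backbone.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# Stage 1: unsupervised pretraining, i.e. reconstruct unlabeled spectrograms.
encoder, decoder = Encoder(), Decoder()
cae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.rand(8, 1, 128, 128)          # placeholder batch of spectrograms
recon = decoder(encoder(unlabeled))
recon_loss = nn.functional.mse_loss(recon, unlabeled)
cae_opt.zero_grad()
recon_loss.backward()
cae_opt.step()

# Stage 2: discard the decoder; the pretrained encoder initializes a
# 12-class activity classifier that is then fine-tuned on labeled data.
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(64 * 16 * 16, 12))
labels = torch.randint(0, 12, (8,))
clf_loss = nn.functional.cross_entropy(classifier(unlabeled), labels)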