Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, Volume 14, Issue 12, pp. 2462-2466
Main Authors: Seyfioglu, Mehmet Saygin; Gurbuz, Sevgi Zubeyde
Format: Journal Article
Language: English
Publisher: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 1 December 2017
ISSN: 1545-598X, 1558-0571
Description
Abstract: Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting while achieving good generalization. In radar applications, however, acquiring a measured data set on the order of thousands of samples is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques, unsupervised pretraining and transfer learning, for training DNNs on small data sets is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogLeNet) is used to augment measured RF data for training. A 12-class problem for discrimination of micro-Doppler signatures of indoor human activities is used to analyze activation maps, bottleneck features, class models, and classification accuracy as a function of training sample size. Results show that on meager data sets, transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively. Visualization of activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.
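As a concrete illustration of the two initialization strategies the abstract compares, below is a minimal PyTorch sketch. It is an assumption-laden sketch, not the letter's actual implementation: the names ConvAutoencoder, pretrain_cae, and CAEClassifier, the layer sizes, and all hyperparameters are hypothetical; only the 12-class setting and the use of a CAE and a pretrained VGGNet are taken from the abstract.

# A minimal sketch of both initialization strategies (illustrative only;
# the letter's exact CAE architecture and hyperparameters are not given here).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # 12 indoor human activities, per the abstract

# --- Strategy 1: unsupervised pretraining via a convolutional autoencoder ---
class ConvAutoencoder(nn.Module):
    """CAE trained to reconstruct unlabeled micro-Doppler spectrograms;
    the encoder is later reused to initialize a classifier."""
    def __init__(self):
        super().__init__()
        # Hypothetical layer sizes, chosen only for illustration.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain_cae(cae, unlabeled_loader, epochs=10):
    """Minimize reconstruction error on unlabeled spectrograms,
    then return the encoder as a non-random initialization."""
    opt = torch.optim.Adam(cae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_loader:       # x: (B, 1, H, W) spectrograms
            opt.zero_grad()
            loss = loss_fn(cae(x), x)
            loss.backward()
            opt.step()
    return cae.encoder

class CAEClassifier(nn.Module):
    """12-class classifier whose convolutional layers start from the
    pretrained encoder instead of random weights."""
    def __init__(self, pretrained_encoder, feat_dim):
        super().__init__()
        self.encoder = pretrained_encoder
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(feat_dim, NUM_CLASSES))

    def forward(self, x):
        return self.head(self.encoder(x))

# --- Strategy 2: transfer learning from an ImageNet-pretrained VGGNet ---
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # swap the 1000-way head
# Spectrograms are single channel while ImageNet models expect three, so
# replicate the channel (x.repeat(1, 3, 1, 1)) before the forward pass;
# both networks are then fine-tuned on the small labeled radar set.

For example, with 64 x 64 spectrogram inputs the encoder above produces 32 x 16 x 16 activations, so the CAE-initialized classifier would be built as CAEClassifier(pretrain_cae(ConvAutoencoder(), loader), feat_dim=32 * 16 * 16). The abstract's finding suggests choosing between the two sketches by training set size: transfer learning below roughly 650 samples, CAE pretraining above.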
DOI: 10.1109/LGRS.2017.2771405