Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, Vol. 14, No. 12, pp. 2462-2466
Main Authors: Seyfioglu, Mehmet Saygin; Gurbuz, Sevgi Zubeyde
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2017
ISSN: 1545-598X, 1558-0571
Description
Summary: Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting while maintaining good generalization. In radar applications, however, acquiring a measured data set on the order of thousands of samples is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques (unsupervised pretraining and transfer learning) for training DNNs on small data sets is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogleNet) is used to augment the measured RF data for training. A 12-class problem for the discrimination of micro-Doppler signatures of indoor human activities is utilized to analyze activation maps, bottleneck features, the class model, and classification accuracy with respect to training sample size. Results show that on meager data sets transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively. Visualization of the activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.
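To illustrate the two initialization strategies contrasted in the abstract, the following minimal sketch (not the authors' code) shows, in Keras/TensorFlow, (a) unsupervised pretraining of a convolutional autoencoder whose encoder then seeds a classifier, and (b) transfer learning from an ImageNet-pretrained VGG16 base with a new classification head. Input size, layer widths, and hyperparameters are illustrative assumptions, not the architecture reported in the letter.

```python
# Minimal sketch of the two initialization strategies, assuming micro-Doppler
# spectrograms resized to 224x224; all layer sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 12  # 12 indoor human activities, per the abstract


def build_cae(input_shape=(224, 224, 1)):
    """Convolutional autoencoder (CAE) for unsupervised pretraining."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, activation='relu', padding='same')(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
    encoded = layers.MaxPooling2D(2)(x)               # bottleneck features
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(encoded)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)
    autoencoder = models.Model(inp, out)              # trained to reconstruct input
    encoder = models.Model(inp, encoded)              # shares weights with autoencoder
    return autoencoder, encoder


def cae_pretrained_classifier(encoder):
    """Classifier whose convolutional layers reuse the CAE encoder weights."""
    x = layers.Flatten()(encoder.output)
    x = layers.Dense(128, activation='relu')(x)
    out = layers.Dense(NUM_CLASSES, activation='softmax')(x)
    return models.Model(encoder.input, out)


def vgg_transfer_classifier(input_shape=(224, 224, 3)):
    """Transfer learning: frozen ImageNet VGG16 base plus a new head."""
    base = tf.keras.applications.VGG16(weights='imagenet',
                                       include_top=False,
                                       input_shape=input_shape)
    base.trainable = False                            # train only the new head
    x = layers.Flatten()(base.output)
    x = layers.Dense(128, activation='relu')(x)
    out = layers.Dense(NUM_CLASSES, activation='softmax')(x)
    return models.Model(base.input, out)
```

A typical workflow under these assumptions: fit the autoencoder on unlabeled spectrograms (`autoencoder.fit(x_unlabeled, x_unlabeled, ...)`), then train the classifier built from its encoder on the small labeled set, and compare against the VGG16 transfer-learning model and a randomly initialized network of the same depth.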
DOI: 10.1109/LGRS.2017.2771405