Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities

Bibliographic Details
Published in: IEEE Transactions on Aerospace and Electronic Systems, Vol. 54, No. 4, pp. 1709-1723
Authors: Seyfioglu, Mehmet Saygin; Ozbayoglu, Ahmet Murat; Gurbuz, Sevgi Zubeyde
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2018
Subjects:
ISSN:0018-9251, 1557-9603
Online access: Full text
Description
Abstract: Radar-based activity recognition is a problem that has been of great interest due to applications such as border control and security, pedestrian identification for automotive safety, and remote health monitoring. This paper seeks to show the efficacy of micro-Doppler analysis in distinguishing even those gaits whose micro-Doppler signatures are not visually distinguishable. Moreover, a three-layer, deep convolutional autoencoder (CAE) is proposed, which utilizes unsupervised pretraining to initialize the weights in the subsequent convolutional layers. This architecture is shown to be more effective than other deep learning architectures, such as convolutional neural networks and autoencoders, as well as conventional classifiers employing predefined features, such as support vector machines (SVM), random forest, and extreme gradient boosting. Results show that the proposed deep CAE yields a correct classification rate of 94.2% for micro-Doppler signatures of 12 different human activities measured indoors using a 4 GHz continuous wave radar, a 17.3% improvement over SVM.
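
As an illustration of the scheme the abstract describes (unsupervised convolutional-autoencoder pretraining followed by supervised classification of micro-Doppler spectrograms), a minimal PyTorch sketch follows. The three-layer encoder/decoder, the 64x64 single-channel spectrogram input, all layer widths, and the two-stage training snippet are illustrative assumptions, not the hyperparameters reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Three-layer convolutional autoencoder with a classifier head.
    Layer sizes are illustrative assumptions, not the paper's settings."""
    def __init__(self, n_classes=12):
        super().__init__()
        # Encoder: three strided conv layers, 1x64x64 -> 64x8x8
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder mirrors the encoder; used only during pretraining
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Classifier head attached to the pretrained encoder features
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, n_classes))

    def forward(self, x, reconstruct=False):
        z = self.encoder(x)
        return self.decoder(z) if reconstruct else self.classifier(z)

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 1, 64, 64)            # placeholder micro-Doppler spectrograms

# Stage 1: unsupervised pretraining on reconstruction error,
# which initializes the encoder weights without labels
opt.zero_grad()
F.mse_loss(model(x, reconstruct=True), x).backward()
opt.step()

# Stage 2: supervised fine-tuning of the pretrained encoder plus classifier
y = torch.randint(0, 12, (8,))          # placeholder labels for 12 activities
opt.zero_grad()
F.cross_entropy(model(x), y).backward()
opt.step()

In this sketch, stage 1 plays the role the abstract attributes to unsupervised pretraining: the encoder weights learned by reconstruction carry over into the supervised stage, and the decoder is discarded thereafter.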
DOI:10.1109/TAES.2018.2799758