AutoGCN-Toward Generic Human Activity Recognition With Neural Architecture Search
| Published in: | IEEE Access, Vol. 12, pp. 39505-39516 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024 |
| Subjects: | |
| ISSN: | 2169-3536 |
| Summary: | This paper introduces AutoGCN, a generic Neural Architecture Search (NAS) algorithm for Human Activity Recognition (HAR) using Graph Convolution Networks (GCNs). HAR has attracted growing attention due to advances in deep learning, greater data availability, and enhanced computational capabilities. Concurrently, GCNs have shown promise in modeling relationships between body key points in a skeletal graph. Typically, domain experts develop dataset-specific GCN-based methods, which limits their applicability beyond that specific context. AutoGCN addresses this limitation by simultaneously searching for the best hyperparameter and architecture combination within a versatile search space using a reinforcement controller, while balancing exploration and exploitation with a knowledge reservoir during the search. We conduct extensive experiments on two large skeleton-based action recognition datasets to assess the proposed algorithm's performance. Our results demonstrate the effectiveness of AutoGCN in constructing optimal GCN architectures for HAR, outperforming conventional NAS and GCN methods as well as random search. These findings highlight the importance of a diverse search space and an expressive input representation for model performance and generalizability. |
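The abstract describes a search loop in which a reinforcement controller samples architecture and hyperparameter choices from a joint search space, receives a validation-accuracy reward, and maintains a knowledge reservoir of the best candidates. The sketch below illustrates that general pattern only; all names, search-space dimensions, and the mocked reward are hypothetical assumptions, not the paper's actual implementation.

```python
import math
import random

# Hypothetical search space mixing GCN architecture and training
# hyperparameters (dimensions and values are illustrative assumptions).
SEARCH_SPACE = {
    "num_gcn_layers": [4, 6, 8, 10],
    "graph_partition": ["spatial", "distance", "uniform"],
    "temporal_kernel": [3, 5, 7, 9],
    "learning_rate": [0.1, 0.05, 0.01],
}

class ReinforceController:
    """Toy REINFORCE-style controller: one preference score per choice."""

    def __init__(self, space, lr=0.5):
        self.space = space
        self.lr = lr
        self.prefs = {k: [0.0] * len(v) for k, v in space.items()}

    def sample(self):
        """Sample one candidate config via a softmax over preferences."""
        config, picks = {}, {}
        for key, options in self.space.items():
            weights = [math.exp(p) for p in self.prefs[key]]
            r, acc = random.random() * sum(weights), 0.0
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    config[key], picks[key] = options[i], i
                    break
        return config, picks

    def update(self, picks, reward, baseline):
        """Raise preferences of sampled choices by the advantage."""
        advantage = reward - baseline
        for key, i in picks.items():
            self.prefs[key][i] += self.lr * advantage

def mock_reward(config):
    # Stand-in for training and validating the sampled GCN; in this toy,
    # deeper nets with spatial partitioning score slightly higher.
    score = 0.5 + 0.03 * config["num_gcn_layers"] / 10
    if config["graph_partition"] == "spatial":
        score += 0.1
    return score + random.uniform(-0.02, 0.02)

def search(steps=50, reservoir_size=5, seed=0):
    """Run the search; keep the top-k candidates as a knowledge reservoir."""
    random.seed(seed)
    controller = ReinforceController(SEARCH_SPACE)
    reservoir, baseline = [], 0.0
    for _ in range(steps):
        config, picks = controller.sample()
        reward = mock_reward(config)
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
        controller.update(picks, reward, baseline)
        reservoir.append((reward, config))
        reservoir = sorted(reservoir, key=lambda x: -x[0])[:reservoir_size]
    return reservoir

best = search()
print(best[0])
```

The moving-average baseline is one common variance-reduction choice for REINFORCE; the reservoir here simply retains the top-scoring configurations seen so far so that exploration cannot discard good candidates.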
|---|---|
| DOI: | 10.1109/ACCESS.2024.3377103 |