Multi‐stream adaptive spatial‐temporal attention graph convolutional network for skeleton‐based action recognition

Detailed bibliography
Published in: IET Computer Vision, Vol. 16, No. 2, pp. 143-158
Main authors: Yu, Lubin; Tian, Lianfang; Du, Qiliang; Bhutto, Jameel Ahmed
Format: Journal Article
Language: English
Published: Stevenage: John Wiley & Sons, Inc (Wiley), 01.03.2022
ISSN: 1751-9632, 1751-9640
Description
Summary: Skeleton‐based action recognition algorithms have been widely applied to human action recognition. Graph convolutional networks (GCNs) generalize convolutional neural networks (CNNs) to non‐Euclidean graphs and achieve strong performance in skeleton‐based action recognition. However, existing GCN‐based models have several issues: the graph topology is defined from the natural human skeleton, is fixed during training, and may not suit different layers of the GCN model or diverse datasets. In addition, higher‐order information in the joint data, such as skeleton and dynamic information, is not fully utilised. This work proposes a novel multi‐stream adaptive spatial‐temporal attention GCN model that overcomes these issues. The method introduces a learnable topology graph that adaptively adjusts the connection relationships and their strengths and is updated during training along with the other network parameters. At the same time, adaptive connection parameters are used to optimise the combination of the natural skeleton graph and the adaptive topology graph. A spatial‐temporal attention module is embedded in each graph convolution layer so that the network focuses on the most critical joints and frames. A multi‐stream framework integrates multiple inputs, which further improves the performance of the network. The final network achieves state‐of‐the‐art performance on both the NTU‐RGBD and Kinetics‐Skeleton action recognition datasets. The experimental results show that the proposed method outperforms existing methods in all respects, demonstrating its superiority.
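
As a rough illustration of the adaptive topology idea described in the summary (not the authors' implementation), the following minimal PyTorch sketch combines a fixed skeleton adjacency matrix with a learnable topology graph that is trained jointly with the convolution weights. The module name AdaptiveGraphConv, the gating parameter alpha, and all tensor shapes are assumptions made only for this sketch.

    # Hedged sketch, assuming a (batch, channels, frames, joints) input layout.
    import torch
    import torch.nn as nn

    class AdaptiveGraphConv(nn.Module):
        def __init__(self, in_channels, out_channels, skeleton_adj):
            super().__init__()
            # Fixed natural-skeleton adjacency (V x V), not trained.
            self.register_buffer("A_fixed", skeleton_adj)
            # Learnable topology graph, updated by backpropagation
            # together with the other network parameters.
            self.A_learn = nn.Parameter(torch.zeros_like(skeleton_adj))
            # Learnable gate balancing the fixed and adaptive graphs
            # (a stand-in for the paper's adaptive connection parameters).
            self.alpha = nn.Parameter(torch.ones(1))
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        def forward(self, x):
            # x: (N, C, T, V) = batch, channels, frames, joints
            A = self.A_fixed + self.alpha * self.A_learn   # combined topology
            x = torch.einsum("nctv,vw->nctw", x, A)        # spatial aggregation
            return self.conv(x)

    # Usage: 25 joints as in NTU-RGBD; identity adjacency as a placeholder.
    layer = AdaptiveGraphConv(3, 64, torch.eye(25))
    out = layer(torch.randn(8, 3, 32, 25))   # -> (8, 64, 32, 25)

In the paper's multi-stream setting, several such streams (e.g. joint, bone, and motion inputs) would be processed separately and their scores fused, with a spatial-temporal attention module applied in each graph convolution layer; those parts are omitted from this sketch.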
DOI: 10.1049/cvi2.12075