Multi‐stream adaptive spatial‐temporal attention graph convolutional network for skeleton‐based action recognition
| Published in: | IET Computer Vision, Vol. 16, No. 2, pp. 143-158 |
|---|---|
| Main Authors: | Yu, Lubin; Tian, Lianfang; Du, Qiliang; Bhutto, Jameel Ahmed |
| Format: | Journal Article |
| Language: | English |
| Published: | Stevenage: John Wiley & Sons, Inc (Wiley), 01.03.2022 |
| ISSN: | 1751-9632, 1751-9640 |
| Abstract | Skeleton-based algorithms are widely used for human action recognition. Graph convolutional networks (GCNs) generalize convolutional neural networks (CNNs) to non-Euclidean graphs and achieve strong performance in skeleton-based action recognition. However, existing GCN-based models have several limitations: the graph topology is defined by the natural skeleton of the human body, is fixed during training, and may not suit different layers of the GCN or different datasets; moreover, higher-order information in the joint data, such as bone (skeleton) and motion information, is not fully exploited. This work proposes a multi-stream adaptive spatial-temporal attention GCN that addresses these issues. A learnable topology graph adaptively adjusts the connection relationships and their strengths and is updated during training together with the other network parameters, while adaptive connection parameters combine the natural skeleton graph with the learned topology graph. A spatial-temporal attention module embedded in each graph convolution layer keeps the network focused on the most informative joints and frames, and a multi-stream framework integrates multiple inputs to further improve performance. The resulting network achieves state-of-the-art accuracy on both the NTU-RGBD and Kinetics-Skeleton action recognition datasets, consistently outperforming existing methods. (A minimal illustrative sketch of the adaptive graph convolution follows the record fields below.) |
|---|---|
| Author | Bhutto, Jameel Ahmed; Du, Qiliang; Yu, Lubin; Tian, Lianfang |
| Author_xml | 1. Yu, Lubin (ORCID 0000-0002-6831-0545), South China University of Technology; 2. Tian, Lianfang, Ministry of Natural Resources; 3. Du, Qiliang (qldu@scut.edu.cn), Key Laboratory of Autonomous Systems and Network Control of Ministry of Education; 4. Bhutto, Jameel Ahmed, South China University of Technology |
| CitedBy_id | crossref_primary_10_1142_S0218126625502391 crossref_primary_10_1007_s10489_022_04179_8 crossref_primary_10_3390_app14188185 crossref_primary_10_1007_s10489_024_05719_0 crossref_primary_10_1016_j_cag_2022_12_008 crossref_primary_10_1515_nleng_2025_0143 |
| Cites_doi | 10.1109/CVPR.2019.00810 10.1145/3394171.3413941 10.1109/ICMEW.2017.8026285 10.1109/Tip.2020.3028207 10.1016/j.patrec.2017.04.004 10.1109/CVPR.2016.331 10.1109/CVPR.2015.7299176 10.1007/978-3-030-01246-5_7 10.1109/CVPRW.2017.207 10.1109/CVPR42600.2020.01434 10.1109/ICMEW.2017.8026282 10.1109/CVPR.2017.143 10.1016/j.neucom.2018.06.071 10.1109/CVPR.2017.486 10.1109/ICCV.2017.233 10.1609/aaai.v34i03.5652 10.1007/978-3-030-01228-1_25 10.1145/3343031.3351170 10.1016/j.aei.2016.04.009 10.1609/aaai.v32i1.12328 10.1145/2907069 10.1109/TMM.2017.2666540 10.1609/aaai.v33i01.33018561 10.1007/s11042-015-3112-5 10.1109/Access.2020.3049029 10.1109/CVPR.2014.82 10.1109/CVPR.2016.115 |
| ContentType | Journal Article |
| Copyright | 2021 The Authors. published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. 2022. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1049/cvi2.12075 |
| Discipline | Applied Sciences |
| EISSN | 1751-9640 |
| EndPage | 158 |
| ExternalDocumentID | oai_doaj_org_article_dd25806ce8c74c41a43a6f39474d50a6 10_1049_cvi2_12075 CVI212075 |
| Genre | article |
| GrantInformation_xml | Key-Area Research and Development Program of Guangdong Province (2018B010109001; 2019B020214001; 2020B1111010002); Guangdong Marine Economic Development Project (GDNRC[2020]018) |
| ISICitedReferencesCount | 7 |
| ISSN | 1751-9632 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 2 |
| Language | English |
| License | Attribution-NonCommercial-NoDerivs |
| ORCID | 0000-0002-6831-0545 |
| OpenAccessLink | https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fcvi2.12075 |
| PQID | 3092292400 |
| PQPubID | 1936354 |
| PageCount | 16 |
| PublicationCentury | 2000 |
| PublicationDate | March 2022 (2022-03-01) |
| PublicationDecade | 2020 |
| PublicationPlace | Stevenage |
| PublicationTitle | IET computer vision |
| PublicationYear | 2022 |
| Publisher | John Wiley & Sons, Inc; Wiley |
| StartPage | 143 |
| SubjectTerms | Algorithms; Artificial neural networks; Cognition & reasoning; computer graphics; computer vision; convolutional neural nets; Datasets; Deep learning; Fourier transforms; graph theory; graphics processing units; Graphs; Human activity recognition; Human body; image colour analysis; image motion analysis; image thinning; learning (artificial intelligence); Lie groups; Methods; Network topologies; Neural networks; Parameters; Performance enhancement; Semantics; space-time adaptive processing; topology |
| Title | Multi‐stream adaptive spatial‐temporal attention graph convolutional network for skeleton‐based action recognition |
| URI | https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fcvi2.12075 https://www.proquest.com/docview/3092292400 https://doaj.org/article/dd25806ce8c74c41a43a6f39474d50a6 |
| Volume | 16 |
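The abstract describes three ingredients: a fixed skeleton graph combined with a learnable adaptive topology, a spatial-temporal attention gate embedded in each graph convolution layer, and multi-stream fusion of several inputs. The block below is a minimal sketch, assuming a PyTorch implementation, of how the first two ingredients can be wired into a single graph-convolution unit. It is not the authors' published code; the class name, the parameters `B`, `alpha`, `joint_att`, `frame_att`, and the simple pooling-based attention are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): an adaptive graph convolution
# unit in the spirit of the abstract. The fixed skeleton adjacency A is combined
# with a learnable topology B, weighted by a learnable coefficient alpha, and a
# lightweight joint/frame attention gate is applied to the output.
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                        # (V, V) fixed skeleton graph
        self.B = nn.Parameter(torch.zeros_like(A) + 1e-6)   # learnable topology graph
        self.alpha = nn.Parameter(torch.ones(1))            # adaptive mixing coefficient
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.joint_att = nn.Conv1d(out_channels, 1, kernel_size=1)  # spatial (joint) gate
        self.frame_att = nn.Conv1d(out_channels, 1, kernel_size=1)  # temporal (frame) gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -- batch, channels, frames, joints
        adj = self.A + self.alpha * self.B                  # adaptive topology (V, V)
        y = torch.einsum("nctv,vw->nctw", x, adj)           # aggregate features over joints
        y = self.conv(y)                                    # pointwise feature transform

        s = y.mean(dim=2)                                   # pool over time   -> (N, C, V)
        joint_gate = torch.sigmoid(self.joint_att(s)).unsqueeze(2)   # (N, 1, 1, V)
        f = y.mean(dim=3)                                   # pool over joints -> (N, C, T)
        frame_gate = torch.sigmoid(self.frame_att(f)).unsqueeze(3)   # (N, 1, T, 1)
        return y * joint_gate * frame_gate                  # emphasise key joints and frames


if __name__ == "__main__":
    V = 25                                                  # e.g. NTU-RGBD joint count
    layer = AdaptiveGraphConv(3, 64, torch.eye(V))          # identity as placeholder adjacency
    out = layer(torch.randn(8, 3, 64, V))                   # joint-coordinate stream
    print(out.shape)                                        # torch.Size([8, 64, 64, 25])
```

For the multi-stream part of the abstract, one such network would typically be trained per input (for example joint coordinates, bone vectors, and their frame-to-frame differences) and the per-stream class scores summed or averaged at test time; the exact fusion used in the paper is not reproduced here.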