Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 23, No. 4, p. 2182
Main Authors: Morshed, Md Golam, Sultana, Tangina, Alam, Aftab, Lee, Young-Koo
Format: Journal Article
Language:English
Published: Basel, Switzerland: MDPI AG, 15.02.2023
ISSN: 1424-8220
Abstract Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most challenging issues in computer vision is the automatic and precise identification of human activities. Feature learning-based representations for action recognition have increased significantly in recent years, owing to the widespread use of deep learning-based features. This study presents an in-depth analysis of human activity recognition that investigates recent developments in computer vision. Augmented reality, human–computer interaction, cybersecurity, home monitoring, and surveillance cameras are all examples of computer vision applications that often operate in conjunction with human action detection. We give a taxonomy-based, rigorous study of human activity recognition techniques, discussing the best ways to acquire human action features derived from RGB and depth data, as well as the latest research on deep learning and hand-crafted techniques. We also describe a generic architecture for recognizing human actions in the real world and its currently prominent research topics. Finally, we offer some analysis concepts and proposals for academics. Researchers studying human action recognition in depth will find this review an effective tool.
Audience Academic
AuthorAffiliation 1 Department of Computer Science and Engineering, Kyung Hee University, Global Campus, Yongin-si 17104, Republic of Korea
2 Department of Electronics and Communication Engineering, Hajee Mohammad Danesh Science & Technology University, Dinajpur 5200, Bangladesh
3 Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
ORCID Morshed, Md Golam: 0000-0002-6262-6952
Sultana, Tangina: 0000-0002-3896-5591
Alam, Aftab: 0000-0001-9222-2468
Lee, Young-Koo: 0000-0003-2314-5395
ContentType Journal Article
Copyright COPYRIGHT 2023 MDPI AG
2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.3390/s23042182
Discipline Engineering
EISSN 1424-8220
ExternalDocumentID PMC9963970
Genre Journal Article
Review
GeographicLocations United States
New Jersey
Germany
GrantInformation Institute for Information & communications Technology Promotion (IITP), grant IITP-2022-2021-0-00859; Korea Government (MSIT) (Artificial Intelligence Innovation Hub), grant 2021-0-02068
ISSN 1424-8220
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 4
Keywords human action recognition; deep learning; hand-crafted; taxonomy; survey; computer vision
Language English
PMID 36850778
PublicationDate 2023-02-15
PublicationPlace Basel, Switzerland
PublicationTitle Sensors (Basel, Switzerland)
PublicationTitleAlternate Sensors (Basel)
PublicationYear 2023
Publisher MDPI AG
– volume: 24
  start-page: 971
  year: 2013
  ident: ref_217
  article-title: Recognizing 50 human action categories of web videos
  publication-title: Mach. Vis. Appl.
  doi: 10.1007/s00138-012-0450-4
– volume: 143
  start-page: 56
  year: 2018
  ident: ref_75
  article-title: Skeleton embedded motion body partition for human action recognition using depth sequences
  publication-title: Signal Process.
  doi: 10.1016/j.sigpro.2017.08.016
– volume: 17
  start-page: 386
  year: 2016
  ident: ref_55
  article-title: A survey on activity detection and classification using wearable sensors
  publication-title: IEEE Sens. J.
  doi: 10.1109/JSEN.2016.2628346
– volume: 103
  start-page: 107102
  year: 2021
  ident: ref_247
  article-title: Efficient activity recognition using lightweight CNN and DS-GRU network for surveillance applications
  publication-title: Appl. Soft Comput.
  doi: 10.1016/j.asoc.2021.107102
– ident: ref_248
  doi: 10.1109/AVSS.2016.7738021
– volume: 9
  start-page: 82686
  year: 2021
  ident: ref_95
  article-title: Making sense of neuromorphic event data for human action recognition
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2021.3085708
– volume: 19
  start-page: 1510
  year: 2017
  ident: ref_125
  article-title: Sequential deep trajectory descriptor for action recognition with three-stream CNN
  publication-title: IEEE Trans. Multimed.
  doi: 10.1109/TMM.2017.2666540
– ident: ref_154
– ident: ref_239
– ident: ref_160
– ident: ref_122
  doi: 10.1109/WACVW54805.2022.00017
– ident: ref_71
  doi: 10.1007/978-3-642-33709-3_62
– volume: 59
  start-page: 574
  year: 2017
  ident: ref_193
  article-title: First and second order dynamics in a hierarchical SOM system for action recognition
  publication-title: Appl. Soft Comput.
  doi: 10.1016/j.asoc.2017.06.007
– volume: 60
  start-page: 4
  year: 2017
  ident: ref_48
  article-title: Going deeper into action recognition: A survey
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2017.01.010
– ident: ref_168
  doi: 10.1109/ICCVW.2017.369
– volume: 6
  start-page: 339
  year: 2012
  ident: ref_223
  article-title: Continuous gesture trajectory recognition system based on computer vision
  publication-title: Int. J. Appl. Math. Inf. Sci.
– ident: ref_158
  doi: 10.5244/C.26.124
– ident: ref_137
  doi: 10.1109/WACV.2019.00015
– volume: 29
  start-page: 1555008
  year: 2015
  ident: ref_6
  article-title: A survey of applications and human motion recognition with Microsoft Kinect
  publication-title: Int. J. Pattern Recognit. Artif. Intell.
  doi: 10.1142/S0218001415550083
– volume: 31
  start-page: 116
  year: 2014
  ident: ref_108
  article-title: Optimizing human action recognition based on a cooperative coevolutionary algorithm
  publication-title: Eng. Appl. Artif. Intell.
  doi: 10.1016/j.engappai.2013.10.003
– ident: ref_40
  doi: 10.1145/2750858.2807520
– ident: ref_117
  doi: 10.1109/CVPR.2018.00127
– ident: ref_26
  doi: 10.1007/978-3-319-46487-9_50
– ident: ref_202
  doi: 10.1109/WACV51458.2022.00073
– ident: ref_147
  doi: 10.1609/aaai.v31i1.11212
– ident: ref_226
  doi: 10.1109/CVPR.2015.7299097
– volume: 32
  start-page: 289
  year: 2016
  ident: ref_13
  article-title: A comprehensive survey of human action recognition with spatio-temporal interest point (STIP) detector
  publication-title: Vis. Comput.
  doi: 10.1007/s00371-015-1066-2
– ident: ref_139
– ident: ref_246
  doi: 10.1145/3474085.3475572
– ident: ref_227
– volume: 51
  start-page: 1
  year: 2018
  ident: ref_47
  article-title: Activity recognition with evolving data streams: A review
  publication-title: ACM Comput. Surv. (CSUR)
  doi: 10.1145/3158645
– volume: 66
  start-page: 202
  year: 2017
  ident: ref_77
  article-title: Learning discriminative trajectorylet detector sets for accurate skeleton-based action recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2017.01.015
– ident: ref_228
  doi: 10.1109/ICCV.2009.5459361
– ident: ref_87
– volume: 9
  start-page: 763
  year: 2015
  ident: ref_224
  article-title: A novel method for hand posture recognition based on depth information descriptor
  publication-title: KSII Trans. Internet Inf. Syst. (TIIS)
  doi: 10.3837/tiis.2015.02.016
– volume: 14
  start-page: 3170
  year: 2018
  ident: ref_157
  article-title: An efficient deep learning model to predict cloud workload for industry informatics
  publication-title: IEEE Trans. Ind. Inform.
  doi: 10.1109/TII.2018.2808910
– volume: 36
  start-page: 914
  year: 2013
  ident: ref_86
  article-title: Learning actionlet ensemble for 3D human action recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2013.198
– volume: 46
  start-page: 147
  year: 2019
  ident: ref_251
  article-title: Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions
  publication-title: Inf. Fusion
  doi: 10.1016/j.inffus.2018.06.002
– ident: ref_162
– volume: 19
  start-page: 1508
  year: 2018
  ident: ref_2
  article-title: Sensing-enhanced therapy system for assessing children with autism spectrum disorders: A feasibility study
  publication-title: IEEE Sens. J.
  doi: 10.1109/JSEN.2018.2877662
– volume: 328
  start-page: 147
  year: 2019
  ident: ref_184
  article-title: Deep key frame extraction for sport training
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2018.03.077
– ident: ref_76
– ident: ref_192
  doi: 10.1109/ICIP.2017.8296405
– ident: ref_82
– volume: 11
  start-page: 3371
  year: 2010
  ident: ref_156
  article-title: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion
  publication-title: J. Mach. Learn. Res.
– volume: 17
  start-page: 3585
  year: 2017
  ident: ref_1
  article-title: Radar and RGB-depth sensors for fall detection: A review
  publication-title: IEEE Sens. J.
  doi: 10.1109/JSEN.2017.2697077
– volume: 57
  start-page: 1843
  year: 2011
  ident: ref_107
  article-title: Abnormal human activity recognition system based on R-transform and kernel discriminant technique for elderly home care
  publication-title: IEEE Trans. Consum. Electron.
  doi: 10.1109/TCE.2011.6131162
– volume: 45
  start-page: 1340
  year: 2014
  ident: ref_78
  article-title: 3-D human action recognition by shape analysis of motion trajectories on Riemannian manifold
  publication-title: IEEE Trans. Cybern.
  doi: 10.1109/TCYB.2014.2350774
– volume: 77
  start-page: 25
  year: 2016
  ident: ref_100
  article-title: A proposed unified framework for the recognition of human activity by exploiting the characteristics of action dynamics
  publication-title: Robot. Auton. Syst.
  doi: 10.1016/j.robot.2015.11.013
– ident: ref_145
  doi: 10.1109/ICCV.2017.233
– ident: ref_243
– volume: 17
  start-page: 512
  year: 2015
  ident: ref_34
  article-title: Learning spatial and temporal extents of human actions for action detection
  publication-title: IEEE Trans. Multimed.
  doi: 10.1109/TMM.2015.2404779
– volume: 55
  start-page: 42
  year: 2016
  ident: ref_11
  article-title: From handcrafted to learned representations for human action recognition: A survey
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2016.06.007
– ident: ref_173
  doi: 10.1109/CVPR.2017.498
– ident: ref_167
– volume: 112
  start-page: 74
  year: 2015
  ident: ref_91
  article-title: Coupled hidden conditional random fields for RGB-D human action recognition
  publication-title: Signal Process.
  doi: 10.1016/j.sigpro.2014.08.038
– ident: ref_30
  doi: 10.1109/CVPR.2017.143
– ident: ref_189
– ident: ref_33
  doi: 10.1109/BigComp54360.2022.00055
– ident: ref_183
  doi: 10.1007/978-3-319-46484-8_2
– ident: ref_89
  doi: 10.1109/CVPRW.2013.78
– ident: ref_197
  doi: 10.18653/v1/D16-1264
– ident: ref_210
– ident: ref_195
– ident: ref_114
  doi: 10.24963/ijcai.2018/227
– volume: 35
  start-page: 221
  year: 2012
  ident: ref_127
  article-title: 3D convolutional neural networks for human action recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2012.59
– ident: ref_215
  doi: 10.1109/CVPR52688.2022.00320
– volume: 25
  start-page: 2
  year: 2014
  ident: ref_21
  article-title: Effective 3D action recognition using EigenJoints
  publication-title: J. Vis. Commun. Image Represent.
  doi: 10.1016/j.jvcir.2013.03.001
– ident: ref_155
  doi: 10.1145/1390156.1390294
– volume: 55
  start-page: 93
  year: 2016
  ident: ref_174
  article-title: 3D-based deep convolutional neural network for action recognition with depth sequences
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2016.04.004
– ident: ref_198
  doi: 10.18653/v1/D18-1009
– volume: 25
  start-page: 3010
  year: 2016
  ident: ref_140
  article-title: Representation learning of temporal dynamics for skeleton-based action recognition
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2016.2552404
– volume: 42
  start-page: 6957
  year: 2015
  ident: ref_105
  article-title: Hybrid classifier based human activity recognition using the silhouette and cells
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2015.04.039
– ident: ref_149
  doi: 10.1007/978-3-030-01246-5_7
– ident: ref_29
  doi: 10.1109/ICCV.2017.256
– volume: 45
  start-page: 1474
  year: 2022
  ident: ref_176
  article-title: Constructing stronger and faster baselines for skeleton-based action recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2022.3157033
– volume: 2
  start-page: 28
  year: 2015
  ident: ref_229
  article-title: A review of human activity recognition methods
  publication-title: Front. Robot. AI
  doi: 10.3389/frobt.2015.00028
– ident: ref_172
  doi: 10.1109/CVPR.2017.52
– ident: ref_128
  doi: 10.1109/CVPR.2019.00371
– ident: ref_39
– ident: ref_109
  doi: 10.1109/WACV.2015.150
– volume: 43
  start-page: 1
  year: 2011
  ident: ref_12
  article-title: Human activity analysis: A review
  publication-title: ACM Comput. Surv. (CSUR)
  doi: 10.1145/1922649.1922653
– ident: ref_132
  doi: 10.1109/SMC.2017.8122666
– volume: 12
  start-page: 155
  year: 2016
  ident: ref_22
  article-title: Real-time human action recognition based on depth motion maps
  publication-title: J. Real-Time Image Process.
  doi: 10.1007/s11554-013-0370-1
– ident: ref_144
  doi: 10.1609/aaai.v30i1.10451
– ident: ref_25
  doi: 10.1109/ICCV.2015.510
– ident: ref_205
  doi: 10.1109/CVPR52688.2022.00333
– volume: 139
  start-page: 84
  year: 2014
  ident: ref_150
  article-title: Autoencoder for words
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2013.09.055
– ident: ref_32
  doi: 10.1109/ICCV.2017.317
– ident: ref_44
  doi: 10.1109/CVPR.2019.01230
– ident: ref_141
  doi: 10.1109/WACV.2017.24
– ident: ref_194
  doi: 10.1109/CVPR52688.2022.01930
– ident: ref_244
– ident: ref_84
  doi: 10.1109/CVPR.2014.82
– ident: ref_237
– ident: ref_19
  doi: 10.1109/CVPR.2013.98
– volume: 18
  start-page: 1
  year: 2022
  ident: ref_118
  article-title: Learning from temporal spatial Cubism for cross-dataset skeleton-based action recognition
  publication-title: ACM Trans. Multimed. Comput. Commun. Appl. (TOMM)
– ident: ref_187
– volume: 49
  start-page: 1806
  year: 2018
  ident: ref_169
  article-title: Deep convolutional neural networks for human action recognition using depth maps and postures
  publication-title: IEEE Trans. Syst. Man, Cybern. Syst.
  doi: 10.1109/TSMC.2018.2850149
– volume: 28
  start-page: 807
  year: 2016
  ident: ref_113
  article-title: Skeleton optical spectra-based action recognition using convolutional neural networks
  publication-title: IEEE Trans. Circuits Syst. Video Technol.
  doi: 10.1109/TCSVT.2016.2628339
– ident: ref_204
  doi: 10.1109/CVPR52688.2022.01322
– ident: ref_203
  doi: 10.1109/WACV51458.2022.00086
– volume: 24
  start-page: 624
  year: 2017
  ident: ref_110
  article-title: Joint distance maps based action recognition with convolutional neural networks
  publication-title: IEEE Signal Process. Lett.
  doi: 10.1109/LSP.2017.2678539
– ident: ref_180
  doi: 10.1109/CVPR.2019.00132
– volume: 15
  start-page: 1192
  year: 2012
  ident: ref_54
  article-title: A survey on human activity recognition using wearable sensors
  publication-title: IEEE Commun. Surv. Tutor.
  doi: 10.1109/SURV.2012.110112.00192
– volume: 122
  start-page: 108360
  year: 2022
  ident: ref_85
  article-title: Skeleton-based relational reasoning for group activity analysis
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2021.108360
– ident: ref_67
– volume: 26
  start-page: 4648
  year: 2017
  ident: ref_66
  article-title: Action recognition using 3D histograms of texture and a multi-class boosting classifier
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2017.2718189
– ident: ref_222
  doi: 10.1016/S0338-9898(05)80195-7
– volume: 76
  start-page: 137
  year: 2018
  ident: ref_79
  article-title: DSRF: A flexible trajectory descriptor for articulated human action recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2017.10.034
– volume: 16
  start-page: 3100
  year: 2019
  ident: ref_134
  article-title: Encoding pose features to images with data augmentation for 3-D action recognition
  publication-title: IEEE Trans. Ind. Inform.
– ident: ref_220
– ident: ref_163
– ident: ref_182
– ident: ref_209
  doi: 10.18653/v1/2021.findings-acl.370
– volume: 72
  start-page: 494
  year: 2017
  ident: ref_51
  article-title: Hand action detection from ego-centric depth sequences with error-correcting Hough transform
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2017.08.009
– ident: ref_225
– ident: ref_43
  doi: 10.1109/CVPRW.2013.76
– volume: 18
  start-page: 1527
  year: 2006
  ident: ref_166
  article-title: A fast learning algorithm for deep belief nets
  publication-title: Neural Comput.
  doi: 10.1162/neco.2006.18.7.1527
– ident: ref_112
– ident: ref_232
  doi: 10.1109/AVSS.2010.63
– ident: ref_116
  doi: 10.1109/CVPR.2016.484
– ident: ref_199
– ident: ref_119
  doi: 10.1109/WACV51458.2022.00090
– ident: ref_214
– ident: ref_123
  doi: 10.1109/CVPR46437.2021.00193
– ident: ref_208
– ident: ref_115
  doi: 10.1109/CVPR.2017.137
– ident: ref_179
  doi: 10.1109/CVPR42600.2020.01047
– ident: ref_124
  doi: 10.1609/aaai.v35i2.16235
– volume: 123
  start-page: 104465
  year: 2022
  ident: ref_94
  article-title: Handcrafted localized phase features for human action recognition
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2022.104465
– ident: ref_135
  doi: 10.1109/CVPRW.2017.203
– ident: ref_196
  doi: 10.18653/v1/W18-5446
– volume: 39
  start-page: 1028
  year: 2016
  ident: ref_50
  article-title: Super normal vector for human activity recognition with depth cameras
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2565479
– ident: ref_165
– volume: 5
  start-page: 22590
  year: 2017
  ident: ref_65
  article-title: Multi-temporal depth motion maps-based local binary patterns for 3-D human action recognition
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2017.2759058
– ident: ref_245
– ident: ref_70
  doi: 10.1109/ICPR.2014.602
– ident: ref_236
  doi: 10.1109/CVPR.2008.4587756
– volume: 85
  start-page: 1
  year: 2019
  ident: ref_61
  article-title: Asymmetric 3D convolutional neural networks for action recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2018.07.028
– ident: ref_136
– volume: 8
  start-page: 2238
  year: 2013
  ident: ref_106
  article-title: Human Action Recognition Using APJ3D and Random Forests
  publication-title: J. Softw.
  doi: 10.4304/jsw.8.9.2238-2245
– volume: 61
  start-page: 295
  year: 2017
  ident: ref_49
  article-title: Robust human activity recognition from depth video using spatiotemporal multi-fused features
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2016.08.003
– ident: ref_90
– volume: 42
  start-page: 2684
  year: 2019
  ident: ref_231
  article-title: NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2019.2916873
– volume: 48
  start-page: 70
  year: 2014
  ident: ref_10
  article-title: Human activity recognition from 3D data: A review
  publication-title: Pattern Recognit. Lett.
  doi: 10.1016/j.patrec.2014.04.011
– ident: ref_23
– ident: ref_207
– ident: ref_58
– ident: ref_216
  doi: 10.1109/CVPR.2009.5206557
– ident: ref_38
  doi: 10.1109/CVPR.2014.223
– ident: ref_31
  doi: 10.1609/aaai.v32i1.12328
– ident: ref_233
  doi: 10.1109/CVPR.2015.7298698
– ident: ref_138
  doi: 10.1109/ICCV.2015.460
– ident: ref_175
  doi: 10.1109/CVPR52688.2022.01932
– volume: 41
  start-page: 786
  year: 2014
  ident: ref_83
  article-title: Evolutionary joint selection to improve human action recognition with RGB-D devices
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2013.08.009
– volume: 25
  start-page: 77
  year: 2014
  ident: ref_15
  article-title: STAP: Spatial-temporal attention-aware pooling for action recognition
  publication-title: IEEE Trans. Circuits Syst. Video Technol.
  doi: 10.1109/TCSVT.2014.2333151
– volume: 8
  start-page: 191
  year: 2014
  ident: ref_17
  article-title: Instantaneous threat detection based on a semantic representation of activities, zones and trajectories
  publication-title: Signal Image Video Process.
  doi: 10.1007/s11760-014-0672-1
– ident: ref_80
  doi: 10.1109/CVPR52688.2022.00298
– ident: ref_14
  doi: 10.1007/978-3-319-08991-1_58
– volume: 46
  start-page: 158
  year: 2015
  ident: ref_104
  article-title: Learning spatio-temporal representations for action recognition: A genetic programming approach
  publication-title: IEEE Trans. Cybern.
  doi: 10.1109/TCYB.2015.2399172
– volume: 313
  start-page: 504
  year: 2006
  ident: ref_151
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
  doi: 10.1126/science.1127647
– ident: ref_201
– volume: 34
  start-page: 1995
  year: 2013
  ident: ref_5
  article-title: A survey of human motion analysis using depth imagery
  publication-title: Pattern Recognit. Lett.
  doi: 10.1016/j.patrec.2013.02.006
– volume: 79
  start-page: 3543
  year: 2020
  ident: ref_57
  article-title: Human activity recognition in egocentric video using HOG, GiST and color features
  publication-title: Multimed. Tools Appl.
  doi: 10.1007/s11042-018-6034-1
– ident: ref_213
  doi: 10.1109/CVPR42600.2020.00877
– ident: ref_24
– volume: 4
  start-page: 141
  year: 2006
  ident: ref_238
  article-title: Performance metrics and evaluation issues for continuous activity recognition
  publication-title: Perform. Metrics Intell. Syst.
– ident: ref_218
– ident: ref_18
  doi: 10.1109/ICCV.2013.441
– volume: 175
  start-page: 747
  year: 2016
  ident: ref_73
  article-title: Depth context: A new descriptor for human activity recognition by using sole depth sequences
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2015.11.005
– ident: ref_92
  doi: 10.1109/CVPR.2015.7298708
– volume: 123
  start-page: 350
  year: 2017
  ident: ref_93
  article-title: Max-margin heterogeneous information machine for RGB-D action recognition
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-016-0982-6
– volume: 60
  start-page: 86
  year: 2016
  ident: ref_4
  article-title: RGB-D-based action recognition datasets: A survey
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2016.05.019
– ident: ref_152
  doi: 10.1007/978-3-319-10605-2_1
– ident: ref_191
  doi: 10.1109/CVPR52688.2022.01942
– ident: ref_99
  doi: 10.1109/ICCV.2011.6126443
– ident: ref_102
– ident: ref_219
  doi: 10.1109/CONFLUENCE.2016.7508177
– volume: 104
  start-page: 249
  year: 2006
  ident: ref_46
  article-title: Free viewpoint action recognition using motion history volumes
  publication-title: Comput. Vis. Image Underst.
  doi: 10.1016/j.cviu.2006.07.013
– ident: ref_121
  doi: 10.1109/WACVW54805.2022.00021
– volume: 86
  start-page: 2278
  year: 1998
  ident: ref_206
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proc. IEEE
  doi: 10.1109/5.726791
– ident: ref_45
  doi: 10.1109/ICCVW.2009.5457583
– volume: 3
  start-page: 32
  year: 2004
  ident: ref_230
  article-title: Recognizing human actions: A local SVM approach
  publication-title: Proceedings of the 17th International Conference on Pattern Recognition 2004, ICPR 2004
  doi: 10.1109/ICPR.2004.1334462
– ident: ref_64
  doi: 10.1109/BigMM.2015.82
– volume: 46
  start-page: 498
  year: 2015
  ident: ref_27
  article-title: Action recognition from depth maps using deep convolutional neural networks
  publication-title: IEEE Trans. Hum.-Mach. Syst.
  doi: 10.1109/THMS.2015.2504550
– ident: ref_181
  doi: 10.1109/AVSS.2018.8639122
– ident: ref_3
  doi: 10.1007/978-3-319-16181-5_3
– ident: ref_185
  doi: 10.1109/CVPR.2018.00054
– ident: ref_72
  doi: 10.1109/CVPR.2013.365
– ident: ref_252
– volume: 20
  start-page: 1932
  year: 2017
  ident: ref_74
  article-title: Robust 3D action recognition through sampling local appearances and global distributions
  publication-title: IEEE Trans. Multimed.
  doi: 10.1109/TMM.2017.2786868
– volume: 6
  start-page: 1
  year: 2017
  ident: ref_53
  article-title: RFID systems in healthcare settings and activity of daily living in smart homes: A review
  publication-title: E-Health Telecommun. Syst. Netw.
  doi: 10.4236/etsn.2017.61001
– ident: ref_68
  doi: 10.1109/ICRA.2011.5980382
– ident: ref_36
  doi: 10.1007/978-3-319-16178-5_38
SSID ssj0023338
Score 2.6502273
SecondaryResourceType review_article
Snippet Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most...
SourceID doaj
pubmedcentral
proquest
gale
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage 2182
SubjectTerms Algorithms
Artificial intelligence
Augmented Reality
Automation
Biometrics
Cameras
Computer Security
Computer vision
Datasets
Deep learning
Forecasts and trends
Hand
hand-crafted
human action recognition
Human Activities
Human acts
Human behavior
Humans
Identification
Learning strategies
Machine learning
Machine vision
Neural networks
Pattern Recognition, Automated
Performance evaluation
Research methodology
Review
Sensors
Surveillance
survey
Surveys
Taxonomy
Trends
Virtual reality
Title Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities
URI https://www.ncbi.nlm.nih.gov/pubmed/36850778
https://www.proquest.com/docview/2779680957
https://www.proquest.com/docview/2780763532
https://pubmed.ncbi.nlm.nih.gov/PMC9963970
https://doaj.org/article/44d343fa990c403d8754c9891aa00879
Volume 23
WOSCitedRecordID wos000942101700001
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 1424-8220
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0023338
  issn: 1424-8220
  databaseCode: DOA
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 1424-8220
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0023338
  issn: 1424-8220
  databaseCode: M~E
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVPQU
  databaseName: Health & Medical Collection
  customDbUrl:
  eissn: 1424-8220
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0023338
  issn: 1424-8220
  databaseCode: 7X7
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 1424-8220
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0023338
  issn: 1424-8220
  databaseCode: BENPR
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Publicly Available Content Database
  customDbUrl:
  eissn: 1424-8220
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0023338
  issn: 1424-8220
  databaseCode: PIMPY
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
linkProvider ProQuest
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Human+Action+Recognition%3A+A+Taxonomy-Based+Survey%2C+Updates%2C+and+Opportunities&rft.jtitle=Sensors+%28Basel%2C+Switzerland%29&rft.au=Morshed%2C+Md+Golam&rft.au=Sultana%2C+Tangina&rft.au=Alam%2C+Aftab&rft.au=Lee%2C+Young-Koo&rft.date=2023-02-15&rft.pub=MDPI&rft.eissn=1424-8220&rft.volume=23&rft.issue=4&rft_id=info:doi/10.3390%2Fs23042182&rft.externalDocID=PMC9963970