Robust video content analysis schemes for human action recognition

Bibliographic Details
Published in: Science progress (1916), Vol. 104, No. 2, pp. 1-21
Main authors: Aly, Cherry A.; Abas, Fazly S.; Ann, Goh H.
Format: Journal Article
Language: English
Published: London, England: Sage Publications, Ltd, 01.04.2021
ISSN: 0036-8504; EISSN: 2047-7163
Online access: Full text
Abstract Introduction: Action recognition is a challenging time-series classification task that has received much attention in the recent past due to its importance in critical applications such as surveillance, visual behavior study, topic discovery, security, and content retrieval. Objectives: The main objective of the research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, guided by an analysis of which features are most effective to extract, followed by simple, high-performance machine learning algorithms. Methods: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame-rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification. Results: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes produced highly accurate video analysis results compared with those of other works on these four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood-2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann. Conclusion: Each of the proposed schemes provides high recognition accuracy compared with other state-of-the-art methods; the Second Scheme, in particular, gives results comparable to other benchmarked approaches.
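The first stage of the general scheme described in the abstract, shot boundary detection, can be illustrated with a minimal sketch. The histogram-difference rule below is a standard generic heuristic for cutting a video into shots, not the authors' actual implementation; the frame representation and threshold are assumptions for illustration only.

```python
# Hypothetical sketch: histogram-difference shot boundary detection.
# A "frame" here is a flat list of grayscale pixel values (0-255).
# This is a generic heuristic, not the method from the paper.

def histogram(frame, bins=16):
    """Normalized intensity histogram of one frame."""
    counts = [0] * bins
    for p in frame:
        counts[min(p * bins // 256, bins - 1)] += 1
    n = len(frame)
    return [c / n for c in counts]

def shot_boundaries(frames, threshold=0.5):
    """Indices where the histogram distance between consecutive frames exceeds threshold."""
    boundaries = []
    prev = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = histogram(frame)
        # L1 distance between successive histograms
        dist = sum(abs(a - b) for a, b in zip(prev, cur))
        if dist > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

# Synthetic video: 5 dark frames, then 5 bright frames -> one cut at index 5.
dark = [10] * 64
bright = [240] * 64
video = [dark] * 5 + [bright] * 5
print(shot_boundaries(video))  # -> [5]
```

Detected boundaries split the video into shots, each of which can then be re-sampled to a common frame rate and reduced to a compact feature vector, as the abstract describes.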
BackLink https://www.ncbi.nlm.nih.gov/pubmed/33913378 (View this record in MEDLINE/PubMed)
CitedBy_id 10.1016/j.imavis.2024.105234; 10.32604/cmc.2023.035214; 10.1109/TIP.2022.3228156
Copyright The Author(s) 2021
2021. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License ( https://creativecommons.org/licenses/by-nc/4.0/ ) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages ( https://us.sagepub.com/en-us/nam/open-access-at-sage ). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1177/00368504211005480
Discipline Sciences (General)
EISSN 2047-7163
EndPage 21
ExternalDocumentID PMC10455027
33913378
10_1177_00368504211005480
10.1177_00368504211005480
27043091
Genre Journal Article
ISSN 0036-8504
2047-7163
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Action recognition
space-time interest points
histogram of optical flow
bag of words
binary robust invariant scalable keypoints
histogram of oriented gradient
video analysis
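Among the keywords above, "bag of words" refers to quantizing local descriptors (such as HOG, HOF, or BRISK) against a learned codebook of visual words and representing a video by the histogram of assignments. The sketch below is a toy illustration with 2-D descriptors and a hand-picked codebook; the descriptors, codebook, and values are invented for demonstration and are not taken from the paper.

```python
# Hypothetical sketch: bag-of-visual-words encoding.
# Each descriptor is assigned to its nearest codebook entry (visual word),
# and the input is represented by the normalized histogram of assignments.
# Toy 2-D descriptors stand in for real HOG/HOF/BRISK descriptors.

def nearest_word(desc, codebook):
    """Index of the codebook entry closest to the descriptor (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: sqdist(desc, codebook[k]))

def bag_of_words(descriptors, codebook):
    """Normalized histogram of visual-word assignments over all descriptors."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    total = len(descriptors)
    return [h / total for h in hist]

# Toy codebook of 3 visual words and 4 descriptors.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descs = [(0.1, 0.1), (0.9, 0.1), (0.05, 0.0), (0.1, 0.95)]
print(bag_of_words(descs, codebook))  # -> [0.5, 0.25, 0.25]
```

In practice the codebook is learned by clustering (e.g. k-means) over training descriptors, and the resulting fixed-length histograms feed a conventional classifier.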
Language English
License This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
ORCID 0000-0002-8744-8793
OpenAccessLink http://dx.doi.org/10.1177/00368504211005480
PMID 33913378
PQID 2551526760
PQPubID 1586336
PageCount 21
PublicationCentury 2000
PublicationDate 2021-04-01
PublicationDecade 2020
PublicationPlace London, England
PublicationTitle Science progress (1916)
PublicationTitleAlternate Sci Prog
PublicationYear 2021
Publisher Sage Publications, Ltd
StartPage 1
SubjectTerms Algorithms
Classification
Content analysis
Datasets
Feature extraction
Human Activities
Human activity recognition
Human motion
Human performance
Humans
Image analysis
Image processing
Image Processing, Computer-Assisted - methods
Learning algorithms
Machine learning
Object recognition
Pattern Recognition, Automated - methods
Robustness
Title Robust video content analysis schemes for human action recognition
URI https://www.jstor.org/stable/27043091
https://journals.sagepub.com/doi/full/10.1177/00368504211005480
https://www.ncbi.nlm.nih.gov/pubmed/33913378
https://www.proquest.com/docview/2551526760
https://www.proquest.com/docview/2519798806
https://pubmed.ncbi.nlm.nih.gov/PMC10455027
Volume 104