Human Action Recognition by Semilatent Topic Models

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 10, pp. 1762-1774
Main Authors: Yang Wang; G. Mori
Format: Journal Article
Language: English
Published: United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2009
Subjects:
ISSN: 0162-8828 (print), 1939-3539 (electronic)
Abstract We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel "bag-of-words" representation, where each frame corresponds to a "word". Our models differ from previous latent topic models for visual recognition in two major aspects: first, the latent topics in our models directly correspond to class labels; second, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First, training is much easier because the model parameters decouple. Second, the problem of choosing an appropriate number of latent topics is alleviated. Third, much better performance is achieved by exploiting the class labels available in the training set. We present action classification results on five different data sets. Our results are either comparable to, or significantly better than, previously published results on these data sets.
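The first point in the abstract, the frame-level "bag-of-words" representation, is the kind of step a short sketch can make concrete. The Python below is a minimal, hypothetical illustration and not the authors' exact feature pipeline: it assumes per-frame descriptors are already computed, learns a k-means codebook of VOCAB_SIZE visual words, and maps each frame of a video to its nearest codeword; every name and the codebook size are assumptions.

# A minimal sketch, assuming per-frame descriptors (e.g. motion or shape
# features) are already computed; this is NOT the authors' exact pipeline.
import numpy as np
from sklearn.cluster import KMeans

VOCAB_SIZE = 50  # assumed codebook size (number of visual "words")

def learn_codebook(all_frame_descriptors, vocab_size=VOCAB_SIZE, seed=0):
    """Cluster descriptors pooled over all training videos into visual words."""
    codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
    codebook.fit(all_frame_descriptors)  # expects an (n_frames_total, d) array
    return codebook

def video_to_words(codebook, video_descriptors):
    """Quantize each frame of one video to the index of its nearest codeword."""
    return codebook.predict(video_descriptors)  # (n_frames,) integer word ids

def video_to_histogram(words, vocab_size=VOCAB_SIZE):
    """Collapse a video's word sequence into a normalized word histogram."""
    counts = np.bincount(words, minlength=vocab_size).astype(float)
    return counts / max(counts.sum(), 1.0)

In this view a video is simply a document over a fixed vocabulary of frame-words, which is what makes topic-model machinery applicable.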
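The abstract's other key idea, that the latent topics correspond directly to class labels and that variables which are latent in earlier topic models become observed here, can be illustrated with a deliberately simplified stand-in rather than the paper's full semilatent graphical model. In the sketch below, treating each training video's class label as its observed topic makes the per-topic word distributions decouple into smoothed counts, and classification scores a test video's frame-words under each class; the function names and the smoothing constant alpha are illustrative assumptions.

# A simplified, hypothetical stand-in for "topics = class labels"; it is not
# the paper's full semilatent model, only an illustration of why observing
# the topic of each training video decouples parameter estimation.
import numpy as np

def train_class_word_dists(word_seqs, labels, vocab_size, n_classes, alpha=1.0):
    """Estimate one Dirichlet-smoothed multinomial over codewords per class.
    With the class (topic) of every training video observed, the estimate is
    just independent per-class counts; no EM over latent topics is needed."""
    counts = np.full((n_classes, vocab_size), alpha, dtype=float)
    for words, y in zip(word_seqs, labels):
        counts[y] += np.bincount(words, minlength=vocab_size)
    return counts / counts.sum(axis=1, keepdims=True)

def classify(class_word_dists, words):
    """Return the class whose word distribution best explains the frame-words."""
    log_lik = np.log(class_word_dists)[:, words].sum(axis=1)  # one score per class
    return int(np.argmax(log_lik))

This toy version also shows why choosing the number of latent topics stops being an issue in such a setup: it is pinned to the number of action classes.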
Author Yang Wang (School of Computing Science, Simon Fraser University, Burnaby, BC, Canada)
G. Mori (School of Computing Science, Simon Fraser University, Burnaby, BC, Canada)
BackLink https://www.ncbi.nlm.nih.gov/pubmed/19696448 (view this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1155_2018_6598025
crossref_primary_10_1016_j_patcog_2010_06_018
crossref_primary_10_1016_j_neucom_2012_03_003
crossref_primary_10_1109_TIP_2019_2922100
crossref_primary_10_1109_TGRS_2019_2939001
crossref_primary_10_1109_TCSVT_2019_2890829
crossref_primary_10_1109_TPAMI_2013_128
crossref_primary_10_1587_transinf_E96_D_1783
crossref_primary_10_1016_j_patrec_2011_03_006
crossref_primary_10_1016_j_cviu_2011_09_006
crossref_primary_10_1016_j_cviu_2011_09_007
crossref_primary_10_1177_0165551519877049
crossref_primary_10_1016_j_neucom_2011_10_033
crossref_primary_10_1061__ASCE_CP_1943_5487_0000237
crossref_primary_10_1016_j_cviu_2015_02_012
crossref_primary_10_1016_j_asoc_2017_10_021
crossref_primary_10_1109_TNNLS_2020_3027539
crossref_primary_10_1016_j_pmcj_2014_05_007
crossref_primary_10_1109_TCSVT_2018_2816960
crossref_primary_10_1155_2013_506752
crossref_primary_10_1016_j_eswa_2018_08_029
crossref_primary_10_1049_iet_cvi_2012_0121
crossref_primary_10_2478_jee_2019_0077
crossref_primary_10_1016_j_patrec_2014_07_014
crossref_primary_10_1155_2014_810185
crossref_primary_10_3141_2387_02
crossref_primary_10_1016_j_patcog_2012_10_004
crossref_primary_10_1109_TCDS_2016_2577044
crossref_primary_10_1109_TCSVT_2012_2199390
crossref_primary_10_1016_j_patrec_2012_01_007
crossref_primary_10_1007_s10115_017_1095_4
crossref_primary_10_1080_01691864_2019_1627243
crossref_primary_10_1109_TPAMI_2014_2313122
crossref_primary_10_1016_j_sigpro_2025_110028
crossref_primary_10_1007_s11263_012_0596_6
crossref_primary_10_1016_j_aei_2013_09_001
crossref_primary_10_1109_TPAMI_2014_2339851
crossref_primary_10_1109_ACCESS_2016_2614525
crossref_primary_10_1016_j_aeue_2015_12_016
crossref_primary_10_1007_s11276_019_02234_w
crossref_primary_10_1007_s11432_013_4794_9
crossref_primary_10_1016_j_imavis_2009_11_014
crossref_primary_10_3390_su9040497
crossref_primary_10_1007_s11042_012_0993_4
crossref_primary_10_7717_peerj_cs_923
crossref_primary_10_1109_TPAMI_2011_38
crossref_primary_10_1016_j_patrec_2014_07_011
crossref_primary_10_1016_j_ins_2016_06_016
crossref_primary_10_1016_j_neucom_2013_01_012
crossref_primary_10_1007_s11042_014_1924_3
crossref_primary_10_1007_s11063_018_9852_2
crossref_primary_10_1109_TII_2011_2172452
crossref_primary_10_1007_s00138_013_0589_7
crossref_primary_10_1007_s11263_016_0896_3
crossref_primary_10_1109_TPAMI_2015_2414422
crossref_primary_10_1016_j_patrec_2012_10_019
crossref_primary_10_1111_coin_12429
crossref_primary_10_1007_s10462_020_09904_8
crossref_primary_10_1162_neco_a_01148
crossref_primary_10_3141_2393_09
crossref_primary_10_1007_s00530_019_00629_5
crossref_primary_10_1049_iet_cvi_2015_0420
crossref_primary_10_1007_s11263_015_0833_x
crossref_primary_10_1049_iet_its_2014_0257
crossref_primary_10_1109_TPAMI_2011_253
crossref_primary_10_1016_j_robot_2015_11_013
crossref_primary_10_1007_s11042_016_3630_9
crossref_primary_10_1016_j_patrec_2013_01_008
crossref_primary_10_1016_j_neucom_2012_12_077
crossref_primary_10_1007_s00138_016_0768_4
crossref_primary_10_3390_electronics8101169
crossref_primary_10_1007_s11042_016_4197_1
crossref_primary_10_1080_01691864_2016_1172506
crossref_primary_10_1109_TIP_2013_2292550
crossref_primary_10_1007_s11263_011_0510_7
crossref_primary_10_1016_j_neucom_2012_06_011
crossref_primary_10_1155_2014_723213
crossref_primary_10_1109_TMM_2014_2326734
crossref_primary_10_1016_j_inffus_2014_01_002
crossref_primary_10_1109_TCSVT_2014_2382984
crossref_primary_10_1016_j_patcog_2012_09_027
crossref_primary_10_1016_j_patcog_2016_07_024
crossref_primary_10_1016_j_neucom_2012_02_002
crossref_primary_10_1007_s00521_022_07937_4
crossref_primary_10_1007_s00371_011_0652_1
crossref_primary_10_1145_2340416_2340419
crossref_primary_10_1016_j_knosys_2021_107051
crossref_primary_10_1049_iet_cvi_2013_0306
crossref_primary_10_1109_TPAMI_2011_157
crossref_primary_10_1007_s11042_012_1084_2
crossref_primary_10_1016_j_patcog_2012_03_010
crossref_primary_10_1016_j_patcog_2017_07_013
crossref_primary_10_1007_s11042_017_5588_7
crossref_primary_10_1109_TFUZZ_2014_2370678
crossref_primary_10_1016_j_imavis_2011_12_006
crossref_primary_10_1016_j_neucom_2012_09_008
crossref_primary_10_1109_JIOT_2020_2985082
crossref_primary_10_1080_08839514_2012_629540
crossref_primary_10_1007_s00371_018_1560_4
crossref_primary_10_1016_j_sigpro_2012_07_017
crossref_primary_10_1016_j_eswa_2015_04_039
crossref_primary_10_1016_j_image_2020_115802
crossref_primary_10_1016_j_trc_2013_04_007
crossref_primary_10_1016_j_eswa_2017_02_026
crossref_primary_10_1109_TGRS_2012_2205579
crossref_primary_10_5402_2012_376804
crossref_primary_10_1016_j_patcog_2015_09_017
crossref_primary_10_1177_0278364917690592
crossref_primary_10_1109_TIM_2019_2925410
crossref_primary_10_1016_j_patcog_2012_04_024
crossref_primary_10_1007_s10462_011_9232_z
crossref_primary_10_1016_j_patcog_2012_10_016
crossref_primary_10_1145_2597627
crossref_primary_10_1155_2013_816836
crossref_primary_10_1080_01621459_2017_1285773
crossref_primary_10_1007_s00530_015_0474_5
crossref_primary_10_1109_TCSVT_2011_2135290
crossref_primary_10_1016_j_patcog_2017_06_035
crossref_primary_10_1007_s11042_017_4711_0
crossref_primary_10_3390_rs8030231
crossref_primary_10_1016_j_imavis_2012_08_006
crossref_primary_10_3389_frobt_2020_503452
crossref_primary_10_1155_2014_238234
crossref_primary_10_1109_TFUZZ_2013_2242894
crossref_primary_10_1371_journal_pone_0124640
crossref_primary_10_1016_j_aeue_2019_05_023
Cites_doi 10.1109/CVPR.2008.4587756
10.1007/s11263-007-0122-4
10.1109/CVPR.2006.326
10.1109/34.868681
10.5244/C.20.127
10.1109/ICCV.2007.4408988
10.1007/s11263-006-9794-4
10.1007/s11263-006-4329-6
10.1109/ICCV.2007.4409049
10.1109/34.910878
10.1109/CVPR.2008.4587727
10.1093/bioinformatics/bti515
10.1109/ICCV.2007.4409105
10.1023/A:1007975200487
10.1109/ICCV.2003.1238378
10.1007/11744085_40
10.1007/978-3-540-75703-0_17
10.1109/CVPR.2007.383074
10.1109/34.643892
10.1109/CVPR.2005.16
10.1109/CVPR.2007.383332
10.1109/CVPR.2008.4587735
10.1016/j.cviu.2004.02.004
10.1109/ICCV.2005.239
10.1109/TPAMI.2006.79
10.1109/CVPR.2005.328
10.5555/944919.944937
10.1109/CVPR.2008.4587723
10.1109/ICCV.2005.10
10.1109/CVPR.2006.132
10.1109/CVPR.2007.383132
10.1109/CVPR.2008.4587730
10.1109/ICCV.2005.59
10.1109/ICCV.2005.142
10.1109/ICCV.2003.1238420
10.1109/CVPR.1992.223161
10.1109/VSPETS.2005.1570899
10.1109/TDPVT.2002.1024148
10.1109/ICCV.2005.85
10.1109/ICCV.2005.77
10.1109/CVPR.2008.4587721
10.1109/ICCV.2005.28
10.7551/mitpress/7503.003.0026
10.1109/ICPR.2004.1334462
10.1109/CVPR.2007.383168
10.1145/312624.312649
10.1016/j.imavis.2008.02.008
10.1007/3-540-47969-4_42
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2009
DOI 10.1109/TPAMI.2009.43
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
MEDLINE - Academic
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
Engineering Research Database
ANTE: Abstracts in New Technology & Engineering
MEDLINE - Academic
Discipline Engineering
Computer Science
EISSN 1939-3539
EndPage 1774
ExternalDocumentID 2295178451
19696448
10_1109_TPAMI_2009_43
4785474
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
ISICitedReferencesCount 217
ISSN 0162-8828
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
PMID 19696448
PQID 857465321
PQPubID 23500
PageCount 13
PublicationCentury 2000
PublicationDate 2009-10-01
PublicationDecade 2000
PublicationPlace United States (New York)
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2009
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref12
ref15
ref14
ref53
ref52
ref11
ref10
ref54
ref17
ref16
ref19
ref18
Lucas (ref49)
ref46
ref45
ref48
ref47
ref42
ref41
ref44
ref43
ref8
ref7
ref9
Huang (ref51) 2009
ref4
ref6
ref5
ref40
Blei (ref22) 2008; 20
ref35
ref34
ref37
ref36
ref31
ref30
ref33
ref32
ref2
ref1
ref39
ref38
Little (ref3) 1998; 1
Blei (ref13) 2006; 18
ref24
ref23
ref26
ref25
ref20
ref21
ref28
ref27
ref29
Minka (ref50) 2000
References_xml – ident: ref46
  doi: 10.1109/CVPR.2008.4587756
– ident: ref47
  doi: 10.1007/s11263-007-0122-4
– ident: ref18
  doi: 10.1109/CVPR.2006.326
– ident: ref1
  doi: 10.1109/34.868681
– ident: ref20
  doi: 10.5244/C.20.127
– ident: ref29
  doi: 10.1109/ICCV.2007.4408988
– ident: ref54
  doi: 10.1007/s11263-006-9794-4
– ident: ref11
  doi: 10.1007/s11263-006-4329-6
– ident: ref42
  doi: 10.1109/ICCV.2007.4409049
– year: 2009
  ident: ref51
  article-title: Fitting a Hierarchical Logistic Normal Distribution
– ident: ref27
  doi: 10.1109/34.910878
– ident: ref31
  doi: 10.1109/CVPR.2008.4587727
– ident: ref23
  doi: 10.1093/bioinformatics/bti515
– ident: ref45
  doi: 10.1109/ICCV.2007.4409105
– ident: ref4
  doi: 10.1023/A:1007975200487
– ident: ref38
  doi: 10.1109/ICCV.2003.1238378
– ident: ref15
  doi: 10.1007/11744085_40
– ident: ref24
  doi: 10.1007/978-3-540-75703-0_17
– ident: ref35
  doi: 10.1109/CVPR.2007.383074
– volume: 20
  volume-title: Advances in Neural Information Processing Systems
  year: 2008
  ident: ref22
  article-title: Supervised Topic Models
– ident: ref10
  doi: 10.1109/34.643892
– ident: ref16
  doi: 10.1109/CVPR.2005.16
– ident: ref48
  doi: 10.1109/CVPR.2007.383332
– ident: ref53
  doi: 10.1109/CVPR.2008.4587735
– ident: ref33
  doi: 10.1016/j.cviu.2004.02.004
– ident: ref7
  doi: 10.1109/ICCV.2005.239
– ident: ref6
  doi: 10.1109/TPAMI.2006.79
– ident: ref28
  doi: 10.1109/CVPR.2005.328
– ident: ref12
  doi: 10.5555/944919.944937
– ident: ref43
  doi: 10.1109/CVPR.2008.4587723
– ident: ref8
  doi: 10.1109/ICCV.2005.10
– ident: ref37
  doi: 10.1109/CVPR.2006.132
– ident: ref44
  doi: 10.1109/CVPR.2007.383132
– ident: ref30
  doi: 10.1109/CVPR.2008.4587730
– ident: ref36
  doi: 10.1109/ICCV.2005.59
– ident: ref17
  doi: 10.1109/ICCV.2005.142
– volume: 1
  start-page: 1
  issue: 2
  year: 1998
  ident: ref3
  article-title: Recognizing People by Their Gait: The Shape of Motion
  publication-title: Videre
– volume-title: technical report, Massachusetts Inst. of Technology
  year: 2000
  ident: ref50
  article-title: Estimating a Dirichlet Distribution
– ident: ref2
  doi: 10.1109/ICCV.2003.1238420
– ident: ref9
  doi: 10.1109/CVPR.1992.223161
– ident: ref40
  doi: 10.1109/VSPETS.2005.1570899
– ident: ref32
  doi: 10.1109/TDPVT.2002.1024148
– ident: ref41
  doi: 10.1109/ICCV.2005.85
– start-page: 121
  volume-title: Proc. Defense Advanced Research Projects Agency Image Understanding Workshop
  ident: ref49
  article-title: An Iterative Image Registration Technique with an Application to Stereo Vision
– ident: ref19
  doi: 10.1109/ICCV.2005.77
– ident: ref26
  doi: 10.1109/CVPR.2008.4587721
– volume: 18
  volume-title: Advances in Neural Information Processing Systems
  year: 2006
  ident: ref13
  article-title: Correlated Topic Models
– ident: ref52
  doi: 10.1109/ICCV.2005.28
– ident: ref21
  doi: 10.7551/mitpress/7503.003.0026
– ident: ref39
  doi: 10.1109/ICPR.2004.1334462
– ident: ref34
  doi: 10.1109/CVPR.2007.383168
– ident: ref14
  doi: 10.1145/312624.312649
– ident: ref25
  doi: 10.1016/j.imavis.2008.02.008
– ident: ref5
  doi: 10.1007/3-540-47969-4_42
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 1762
SubjectTerms Algorithms
bag-of-words
Cluster Analysis
Computer vision
Decoupling
event and activity understanding
Hidden Markov models
Human
Human action recognition
Human Activities - classification
Humans
Image analysis
Image motion analysis
Image recognition
Image sequence analysis
Image sequences
Labels
Locomotion - physiology
Mathematical models
Models, Biological
Motion
Movement - physiology
Object recognition
Pattern Recognition, Automated - methods
probabilistic graphical models
Recognition
Studies
Training
video analysis
Video sequences
Visual
Title Human Action Recognition by Semilatent Topic Models
URI https://ieeexplore.ieee.org/document/4785474
https://www.ncbi.nlm.nih.gov/pubmed/19696448
https://www.proquest.com/docview/857465321
https://www.proquest.com/docview/34991437
https://www.proquest.com/docview/67599420
https://www.proquest.com/docview/869849060
Volume 31