CapMatch: Semi-Supervised Contrastive Transformer Capsule With Feature-Based Knowledge Distillation for Human Activity Recognition


Detailed bibliography
Published in: IEEE Transactions on Neural Networks and Learning Systems, Volume 36, Issue 2, pp. 2690-2704
Main authors: Xiao, Zhiwen; Tong, Huagang; Qu, Rong; Xing, Huanlai; Luo, Shouxi; Zhu, Zonghai; Song, Fuhong; Feng, Li
Medium: Journal Article
Language: English
Published: United States: IEEE, 01.02.2025
ISSN: 2162-237X (print); 2162-2388 (electronic)
Abstract This article proposes a semi-supervised contrastive capsule transformer method with feature-based knowledge distillation (KD), called CapMatch, which simplifies existing semi-supervised learning (SSL) techniques for wearable human activity recognition (HAR). CapMatch gracefully hybridizes supervised and unsupervised learning to extract rich representations from input data. In unsupervised learning, CapMatch leverages pseudolabeling, contrastive learning (CL), and feature-based KD to construct similarity learning on lower- and higher-level semantic information extracted from two augmented versions of the data, "weak" and "timecut," recognizing the relationships among the obtained features of classes in the unlabeled data. CapMatch combines the outputs of the weak- and timecut-augmented models to form pseudolabels and thus CL. Meanwhile, CapMatch uses feature-based KD to transfer knowledge from the intermediate layers of the weak-augmented model to those of the timecut-augmented model. To effectively capture both local and global patterns of HAR data, we design a capsule transformer network consisting of four capsule-based transformer blocks and one routing layer. Experimental results show that, compared with a number of state-of-the-art semi-supervised and supervised algorithms, the proposed CapMatch achieves decent performance on three commonly used HAR datasets, namely, HAPT, WISDM, and UCI_HAR. With only 10% of the data labeled, CapMatch achieves $F_{1}$ values higher than 85.00% on these datasets, outperforming 14 semi-supervised algorithms. When the proportion of labeled data reaches 30%, CapMatch obtains $F_{1}$ values no lower than 88.00% on the datasets above, which is better than several classical supervised algorithms, e.g., decision tree and $k$-nearest neighbor (KNN).
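To make the training signal described in the abstract concrete, the following PyTorch sketch illustrates how pseudolabeling, a contrastive term, and feature-based KD over two augmented views ("weak" and "timecut") might be combined. It is an illustration only, not the authors' released code: the model interface, `timecut_augment` window logic, confidence threshold, and temperature are all assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of the
# unlabeled-data objective described in the abstract.
import torch
import torch.nn.functional as F


def timecut_augment(x: torch.Tensor, max_ratio: float = 0.2) -> torch.Tensor:
    """Cutout-style augmentation in the time domain: zero out one random
    contiguous window per sample. x has shape (batch, channels, time)."""
    out = x.clone()
    batch, _, length = x.shape
    for i in range(batch):
        width = int(length * max_ratio * torch.rand(1).item())
        if width > 0:
            start = torch.randint(0, length - width + 1, (1,)).item()
            out[i, :, start:start + width] = 0.0
    return out


def capmatch_unlabeled_loss(model, x_weak, x_timecut,
                            threshold: float = 0.95, tau: float = 0.5):
    """Assumes `model(x)` returns (class logits, list of intermediate
    per-block features) -- a hypothetical interface."""
    logits_w, feats_w = model(x_weak)
    logits_t, feats_t = model(x_timecut)

    # Pseudolabels from the combined weak/timecut predictions
    # ("combines the outputs ... to form pseudolabeling").
    with torch.no_grad():
        probs = 0.5 * (logits_w.softmax(-1) + logits_t.softmax(-1))
        conf, pseudo = probs.max(-1)
        mask = (conf >= threshold).float()  # keep confident samples only
    ce = F.cross_entropy(logits_t, pseudo, reduction="none")
    loss_pseudo = (ce * mask).mean()

    # Simple InfoNCE-style contrastive term: the two views of a sample
    # form the positive pair; other samples in the batch are negatives.
    z_w = F.normalize(feats_w[-1].flatten(1), dim=1)
    z_t = F.normalize(feats_t[-1].flatten(1), dim=1)
    sim = z_w @ z_t.T / tau
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_cl = F.cross_entropy(sim, targets)

    # Feature-based KD: align timecut-branch intermediate features with
    # the (detached) weak-branch features, block by block.
    loss_kd = sum(F.mse_loss(ft, fw.detach())
                  for ft, fw in zip(feats_t, feats_w))

    return loss_pseudo + loss_cl + loss_kd
```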
Author Xiao, Zhiwen
Song, Fuhong
Tong, Huagang
Feng, Li
Zhu, Zonghai
Qu, Rong
Luo, Shouxi
Xing, Huanlai
Author_xml – sequence: 1
  givenname: Zhiwen
  orcidid: 0000-0001-9651-111X
  surname: Xiao
  fullname: Xiao, Zhiwen
  email: xiao1994zw@163.com
  organization: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China
– sequence: 2
  givenname: Huagang
  surname: Tong
  fullname: Tong, Huagang
  email: huagangtong@gmail.com
  organization: College of Economics and Management, Nanjing Tech University, Nanjing, China
– sequence: 3
  givenname: Rong
  orcidid: 0000-0001-8318-7509
  surname: Qu
  fullname: Qu, Rong
  email: rong.qu@nottingham.ac.uk
  organization: School of Computer Science, University of Nottingham, Nottingham, U.K.
– sequence: 4
  givenname: Huanlai
  orcidid: 0000-0002-6345-7265
  surname: Xing
  fullname: Xing, Huanlai
  email: hxx@home.swjtu.edu.cn
  organization: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China
– sequence: 5
  givenname: Shouxi
  orcidid: 0000-0002-4041-3681
  surname: Luo
  fullname: Luo, Shouxi
  email: sxluo@swjtu.edu.cn
  organization: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China
– sequence: 6
  givenname: Zonghai
  orcidid: 0000-0002-2915-0964
  surname: Zhu
  fullname: Zhu, Zonghai
  email: zzhu@swjtu.edu.cn
  organization: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China
– sequence: 7
  givenname: Fuhong
  orcidid: 0009-0007-1482-3744
  surname: Song
  fullname: Song, Fuhong
  email: fhsong@mail.gufe.edu.cn
  organization: School of Information, Guizhou University of Finance and Economics, Guiyang, China
– sequence: 8
  givenname: Li
  surname: Feng
  fullname: Feng, Li
  email: fengli@swjtu.edu.cn
  organization: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China
CODEN ITNNAL
ContentType Journal Article
DOI 10.1109/TNNLS.2023.3344294
Discipline Computer Science
EISSN 2162-2388
EndPage 2704
ExternalDocumentID 38150344
10_1109_TNNLS_2023_3344294
10375112
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
GrantInformation_xml – fundername: Humanities and Social Sciences Program of the Ministry of Education
  grantid: 23YJCZH201
– fundername: Fundamental Research Funds for the Central Universities, China
  funderid: 10.13039/501100012226
– fundername: Natural Science Foundation of Hebei Province
  grantid: F2022105027
  funderid: 10.13039/501100003787
– fundername: Natural Science Foundation of Sichuan Province
  grantid: 2022NSFSC0568; 2022NSFSC0944; 2023NSFSC0459
  funderid: 10.13039/501100018542
ISICitedReferencesCount 119
ISSN 2162-237X
Issue 2
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-2915-0964
0009-0007-1482-3744
0000-0002-4041-3681
0000-0001-9651-111X
0000-0001-8318-7509
0000-0002-6345-7265
OpenAccessLink https://nottingham-repository.worktribe.com/output/29001505
PMID 38150344
PQID 2907195745
PQPubID 23479
PageCount 15
PublicationCentury 2000
PublicationDate 2025-02-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2025
Publisher IEEE
StartPage 2690
SubjectTerms Algorithms
Capsule network (CapNet)
Classification algorithms
contrastive learning (CL)
Data mining
Feature extraction
Human Activities - classification
Human activity recognition
human activity recognition (HAR)
Humans
knowledge distillation (KD)
Neural Networks, Computer
Pattern Recognition, Automated - methods
Semantics
semi-supervised learning (SSL)
similarity learning
Supervised Machine Learning
Transformers
Unsupervised learning
Unsupervised Machine Learning
Wearable Electronic Devices
wearable sensors
Title CapMatch: Semi-Supervised Contrastive Transformer Capsule With Feature-Based Knowledge Distillation for Human Activity Recognition
URI https://ieeexplore.ieee.org/document/10375112
https://www.ncbi.nlm.nih.gov/pubmed/38150344
https://www.proquest.com/docview/2907195745
Volume 36