DiamondNet: A Neural-Network-Based Heterogeneous Sensor Attentive Fusion for Human Activity Recognition
| Published in: | IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 11, pp. 15321-15331 |
|---|---|
| Main Authors: | Zhu, Yida; Luo, Haiyong; Chen, Runze; Zhao, Fang |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.11.2024 |
| ISSN: | 2162-237X (print); 2162-2388 (electronic) |
| DOI: | 10.1109/TNNLS.2023.3285547 |
| Abstract | With the proliferation of intelligent sensors integrated into mobile devices, fine-grained human activity recognition (HAR) based on lightweight sensors has emerged as a useful tool for personalized applications. Although shallow and deep learning algorithms have been proposed for HAR problems in the past decades, these methods have limited capability to exploit semantic features from multiple sensor types. To address this limitation, we propose a novel HAR framework, DiamondNet, which can create heterogeneous multisensor modalities, denoise, extract, and fuse features from a fresh perspective. In DiamondNet, we leverage multiple 1-D convolutional denoising autoencoders (1-D-CDAEs) to extract robust encoder features. We further introduce an attention-based graph convolutional network to construct new heterogeneous multisensor modalities, which adaptively exploit the potential relationship between different sensors. Moreover, the proposed attentive fusion subnet, which jointly employs a global-attention mechanism and shallow features, effectively calibrates different-level features of multiple sensor modalities. This approach amplifies informative features and provides a comprehensive and robust perception for HAR. The efficacy of the DiamondNet framework is validated on three public datasets. The experimental results demonstrate that our proposed DiamondNet outperforms other state-of-the-art baselines, achieving remarkable and consistent accuracy improvements. Overall, our work introduces a new perspective on HAR, leveraging the power of multiple sensor modalities and attention mechanisms to significantly improve the performance. |
|---|---|
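The abstract describes an attentive fusion subnet in which a global-attention mechanism calibrates features from multiple sensor modalities before they are combined. The following NumPy sketch illustrates that general idea only; it is not the paper's implementation, and the function name `global_attention_fuse` and the learned query vector `context` are hypothetical names introduced here for illustration.

```python
# Illustrative sketch (not the paper's code): per-sensor feature vectors are
# scored against an assumed learned global context vector, softmax-normalized,
# and combined into a single fused feature by an attention-weighted sum.
import numpy as np

def global_attention_fuse(features: np.ndarray, context: np.ndarray):
    """features: (n_sensors, d) per-modality encoder features.
    context:  (d,) hypothetical learned global query vector.
    Returns (fused (d,), weights (n_sensors,))."""
    scores = features @ context / np.sqrt(features.shape[1])  # scaled dot-product
    scores -= scores.max()                                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()           # softmax over sensors
    fused = weights @ features                                # attention-weighted sum
    return fused, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # e.g. four sensor modalities, 8-dim features
ctx = rng.normal(size=8)
fused, w = global_attention_fuse(feats, ctx)
assert np.isclose(w.sum(), 1.0) and fused.shape == (8,)
```

With an all-zero context the scores are equal and the softmax degenerates to a plain average, which shows how the attention weights modulate each modality's contribution.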
| Author | Zhao, Fang; Zhu, Yida; Chen, Runze; Luo, Haiyong |
| Author details | 1. Zhu, Yida (ORCID 0000-0001-8643-9150; dozenpiggy@bupt.edu.cn), School of Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China. 2. Luo, Haiyong (ORCID 0000-0001-6827-4225; yhluo@ict.ac.cn), Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China. 3. Chen, Runze (ORCID 0000-0002-6599-7898; chenrz925@bupt.edu.cn), School of Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China. 4. Zhao, Fang (ORCID 0000-0002-4784-5778; zfsse@bupt.edu.cn), School of Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China. |
| CODEN | ITNNAL |
| CitedBy | 10.1109/JSEN.2024.3443308; 10.1109/JSEN.2025.3561418; 10.1109/TNNLS.2025.3556317; 10.3390/s25134028; 10.1109/JSEN.2025.3543928 |
| Cites_doi | 10.1109/MPRV.2008.40 10.24963/ijcai.2019/779 10.24963/ijcai.2018/432 10.1145/1964897.1964918 10.1145/3267305.3267531 10.1145/3380999 10.1109/ISMS.2016.51 10.1109/TNNLS.2020.2978942 10.24963/ijcai.2019/801 10.1016/j.jpdc.2017.05.007 10.1109/TBDATA.2020.2988778 10.1007/BF00994018 10.24963/ijcai.2019/186 10.1109/JIOT.2018.2823084 10.1109/ICCV.2017.612 10.1145/3397323 10.1145/3341162.3345571 10.1109/TNNLS.2019.2927224 10.29172/7c2a6982-6d72-4cd8-bba6-2fccb06a7011 10.1109/TBDATA.2021.3090905 10.1109/RFID.2013.6548154 10.1109/CVPR.2018.00745 10.1145/3090076 10.3390/s16010115 10.1145/3341162.3345570 10.1109/ISWC.2012.13 10.1109/TKDE.2022.3176466 10.3390/s17030529 10.1016/j.inffus.2017.05.004 10.1016/j.asoc.2017.09.027 10.1145/2939672.2939785 10.1145/3410530.3414349 10.1016/j.neucom.2015.07.085 10.24963/ijcai.2019/431 10.1109/TMI.2017.2715284 10.1145/3411836 10.1109/CVPR42600.2020.00269 10.1109/CVPR46437.2021.01049 10.1109/TCYB.2019.2905157 10.1145/3550331 10.1109/BigMM.2019.00026 |
| ContentType | Journal Article |
| DOI | 10.1109/TNNLS.2023.3285547 |
| Discipline | Computer Science |
| EISSN | 2162-2388 |
| EndPage | 15331 |
| ExternalDocumentID | 37402195 10_1109_TNNLS_2023_3285547 10172911 |
| Genre | orig-research Research Support, Non-U.S. Gov't Journal Article |
| Funding | National Natural Science Foundation of China (62261042, 62002026); BUPT Excellent Ph.D. Students Foundation (CX2020220); Key Research Projects of the Joint Research Fund for Beijing Natural Science Foundation and the Fengtai Rail Transit Frontier Research Joint Fund (L221003); Strategic Priority Research Program of Chinese Academy of Sciences (XDA28040500); Fundamental Research Funds for the Central Universities (2022RC13); Open Project of the Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences; Beijing Natural Science Foundation (4232035, 4212024, 4222034); National Key Research and Development Program of China (2022YFB3904700) |
| ISICitedReferencesCount | 9 |
| ISSN | 2162-237X 2162-2388 |
| Issue | 11 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0001-6827-4225 0000-0002-4784-5778 0000-0002-6599-7898 0000-0001-8643-9150 |
| PMID | 37402195 |
| PageCount | 11 |
| PublicationDate | 2024-11-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
| PublicationTitleAbbrev | TNNLS |
| PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
| PublicationYear | 2024 |
| Publisher | IEEE |
| References | ref13 ref12 ref15 Liu (ref48) 2020; 4 ref11 ref10 ref16 ref19 ref18 Kingma (ref47) ref46 ref42 ref44 ref49 ref8 ref7 ref9 ref4 ref3 ref6 ref5 Klambauer (ref39) Yang (ref14) ref35 ref34 ref37 ref36 ref31 Glorot (ref43) ref30 ref33 ref32 ref2 Ioffe (ref40) ref1 Anguita (ref45) ref38 Yang (ref17) ref24 ref23 ref26 ref25 ref20 ref22 ref21 ref28 ref27 ref29 Veličković (ref41) |
| References_xml | – ident: ref16 doi: 10.1109/MPRV.2008.40 – ident: ref34 doi: 10.24963/ijcai.2019/779 – ident: ref19 doi: 10.24963/ijcai.2018/432 – ident: ref13 doi: 10.1145/1964897.1964918 – ident: ref25 doi: 10.1145/3267305.3267531 – volume: 4 start-page: 1 issue: 1 year: 2020 ident: ref48 article-title: GlobalFusion: A global attentional deep learning framework for multisensor information fusion publication-title: Proc. ACM Interact., Mobile, Wearable Ubiquitous Technol. doi: 10.1145/3380999 – ident: ref22 doi: 10.1109/ISMS.2016.51 – ident: ref5 doi: 10.1109/TNNLS.2020.2978942 – ident: ref27 doi: 10.24963/ijcai.2019/801 – start-page: 1 volume-title: Proc. 6th Int. Conf. Learn. Represent. (ICLR) ident: ref41 article-title: Graph attention networks – ident: ref23 doi: 10.1016/j.jpdc.2017.05.007 – ident: ref31 doi: 10.1109/TBDATA.2020.2988778 – start-page: 1 volume-title: Proc. Int. Conf. Learn. Represent. (ICLR) ident: ref47 article-title: Adam: A method for stochastic optimization – ident: ref11 doi: 10.1007/BF00994018 – ident: ref33 doi: 10.24963/ijcai.2019/186 – start-page: 315 volume-title: Proc. AISTATS ident: ref43 article-title: Deep sparse rectifier neural networks – start-page: 972 volume-title: Proc. 31st Int. Conf. Neural Inf. Process. Syst. 
ident: ref39 article-title: Self-normalizing neural networks – ident: ref4 doi: 10.1109/JIOT.2018.2823084 – ident: ref37 doi: 10.1109/ICCV.2017.612 – ident: ref35 doi: 10.1145/3397323 – ident: ref8 doi: 10.1145/3341162.3345571 – ident: ref28 doi: 10.1109/TNNLS.2019.2927224 – ident: ref10 doi: 10.29172/7c2a6982-6d72-4cd8-bba6-2fccb06a7011 – ident: ref30 doi: 10.1109/TBDATA.2021.3090905 – ident: ref2 doi: 10.1109/RFID.2013.6548154 – ident: ref42 doi: 10.1109/CVPR.2018.00745 – ident: ref18 doi: 10.1145/3090076 – ident: ref20 doi: 10.3390/s16010115 – ident: ref24 doi: 10.1145/3341162.3345570 – ident: ref44 doi: 10.1109/ISWC.2012.13 – ident: ref32 doi: 10.1109/TKDE.2022.3176466 – ident: ref15 doi: 10.3390/s17030529 – ident: ref1 doi: 10.1016/j.inffus.2017.05.004 – start-page: 448 volume-title: Proc. IEEE Conf. Int. Conf. Mach. Learn. (ICML) ident: ref40 article-title: Batch normalization: Accelerating deep network training by reducing internal covariate shift – ident: ref26 doi: 10.1016/j.asoc.2017.09.027 – ident: ref12 doi: 10.1145/2939672.2939785 – ident: ref3 doi: 10.1145/3410530.3414349 – ident: ref46 doi: 10.1016/j.neucom.2015.07.085 – start-page: 3995 volume-title: Proc. Int. Joint Conf. Artif. Intell. (IJCAI) ident: ref17 article-title: Deep convolutional neural networks on multichannel time series for human activity recognition – ident: ref21 doi: 10.24963/ijcai.2019/431 – ident: ref38 doi: 10.1109/TMI.2017.2715284 – ident: ref7 doi: 10.1145/3411836 – ident: ref9 doi: 10.1109/CVPR42600.2020.00269 – ident: ref6 doi: 10.1109/CVPR46437.2021.01049 – start-page: 437 volume-title: Proc. 21st Eur. Symp. Artif. Neural Netw., Comput. Intell. Mach. Learn. (ESANN) ident: ref45 article-title: A public domain dataset for human activity recognition using smartphones – ident: ref29 doi: 10.1109/TCYB.2019.2905157 – ident: ref49 doi: 10.1145/3550331 – ident: ref36 doi: 10.1109/BigMM.2019.00026 – start-page: 20 volume-title: Proc. Int. Joint Conf. Artif. Intell. 
(IJCAI) ident: ref14 article-title: Activity recognition: Linking low-level sensors to high-level intelligence |
| StartPage | 15321 |
| SubjectTerms | Adaptation models; Algorithms; Attention; Convolutional denoising autoencoders (CDAEs); Convolutional neural networks; Correlation; Deep Learning; Feature extraction; global-attention mechanism; Graph convolutional networks; Human Activities - classification; Human activity recognition; human activity recognition (HAR); Humans; multisensor modality; Multisensor systems; Neural Networks, Computer; Noise reduction; Pattern Recognition, Automated - methods; self-attention mechanism |
| Title | DiamondNet: A Neural-Network-Based Heterogeneous Sensor Attentive Fusion for Human Activity Recognition |
| URI | https://ieeexplore.ieee.org/document/10172911 https://www.ncbi.nlm.nih.gov/pubmed/37402195 https://www.proquest.com/docview/2833647184 |
| Volume | 35 |