Multi-Modal Cross Learning for an FMCW Radar Assisted by Thermal and RGB Cameras to Monitor Gestures and Cooking Processes
This paper proposes a multi-modal cross learning approach to augment the neural network training phase by additional sensor data. The approach is multi-modal during training (i.e., radar Range-Doppler maps, thermal camera images, and RGB camera images are used for training). In inference, the approach is single-modal (i.e., only radar Range-Doppler maps are needed for classification).
| Published in: | IEEE Access, Vol. 9, pp. 22295-22303 |
|---|---|
| Main Authors: | Altmann, Marco; Ott, Peter; Stache, Nicolaj C.; Waldschmidt, Christian |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2021 |
| Subjects: | |
| ISSN: | 2169-3536 |
| Online Access: | Get full text |
| Abstract | This paper proposes a multi-modal cross learning approach to augment the neural network training phase by additional sensor data. The approach is multi-modal during training (i.e., radar Range-Doppler maps, thermal camera images, and RGB camera images are used for training). In inference, the approach is single-modal (i.e., only radar Range-Doppler maps are needed for classification). The proposed approach uses a multi-modal autoencoder training which creates a compressed data representation containing correlated features across modalities. The encoder part is then used as a pretrained network for the classification task. The benefits are that expensive sensors like high resolution thermal cameras are not needed in the application but a higher classification accuracy is achieved because of the multi-modal cross learning during training. The autoencoders can also be used to generate hallucinated data of the absent sensors. The hallucinated data can be used for user interfaces, a further classification, or other tasks. The proposed approach is verified within a simultaneous cooking process classification, 2×2 cooktop occupancy detection, and gesture recognition task. The main functionality is an overboil protection and gesture control of a 2×2 cooktop. The multi-modal cross learning approach considerably outperforms single-modal approaches on that challenging classification task. |
|---|---|
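The abstract describes a multi-modal autoencoder whose shared bottleneck is trained on all three sensors, while inference uses the radar branch alone and the decoders can hallucinate the absent modalities. A minimal NumPy sketch of that idea follows; all layer sizes, names, and the plain linear encoders/decoders are illustrative stand-ins for the paper's trained networks, not its actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-modality feature sizes and latent width are illustrative, not from the paper.
DIMS = {"radar": 256, "thermal": 128, "rgb": 512}
LATENT = 32

def init_layer(n_in, n_out):
    # Small random linear map standing in for a trained (de)coder network.
    return rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))

encoders = {m: init_layer(d, LATENT) for m, d in DIMS.items()}
decoders = {m: init_layer(LATENT, d) for m, d in DIMS.items()}

def encode_multimodal(inputs):
    """Training-time path: fuse all modalities into one shared latent code."""
    return np.mean([inputs[m] @ encoders[m] for m in inputs], axis=0)

def encode_radar_only(radar):
    """Inference-time path: only the radar branch is evaluated."""
    return radar @ encoders["radar"]

def hallucinate(latent, modality):
    """Decode the shared latent into features of an absent sensor."""
    return latent @ decoders[modality]

# Toy batch of 4 samples per modality.
batch = {m: rng.normal(size=(4, d)) for m, d in DIMS.items()}
z_train = encode_multimodal(batch)              # (4, 32) shared representation
z_infer = encode_radar_only(batch["radar"])     # (4, 32) from radar alone
fake_thermal = hallucinate(z_infer, "thermal")  # (4, 128) hallucinated thermal
```

In the paper's scheme the encoder trained this way is reused as a pretrained front end for the classifier, which is why no thermal or RGB camera is needed at inference time.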
| Author | Ott, Peter; Stache, Nicolaj C.; Altmann, Marco; Waldschmidt, Christian |
| Author_xml | 1. Marco Altmann (ORCID 0000-0001-7118-209X, marco.altmann@hs-heilbronn.de), Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany; 2. Peter Ott (ORCID 0000-0003-3513-4167), Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany; 3. Nicolaj C. Stache (ORCID 0000-0002-6308-0146), Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany; 4. Christian Waldschmidt (ORCID 0000-0003-2090-6136), Institute of Microwave Engineering, Ulm University, Ulm, Germany |
| CODEN | IAECCG |
| CitedBy_id | 10.1109/TMTT.2022.3148403; 10.1007/s12652-023-04606-9; 10.1109/ACCESS.2023.3243854; 10.1109/JSEN.2023.3344789; 10.1109/JSEN.2024.3426030; 10.1007/s10489-022-04258-w; 10.1016/j.iot.2024.101456; 10.1007/s13735-025-00363-x; 10.1109/TIM.2023.3253906; 10.1088/1742-6596/1948/1/012098 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| DOI | 10.1109/ACCESS.2021.3056878 |
| Discipline | Engineering |
| EISSN | 2169-3536 |
| EndPage | 22303 |
| Genre | orig-research |
| GrantInformation_xml | European Regional Development Fund (EFRE) and the Ministry for Science, Research, and Arts Baden-Württemberg, within the project ZAFH MikroSens (funder ID 10.13039/501100008530) |
| ISICitedReferencesCount | 10 |
| ISSN | 2169-3536 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://creativecommons.org/licenses/by/4.0/legalcode |
| ORCID | 0000-0003-2090-6136 0000-0003-3513-4167 0000-0001-7118-209X 0000-0002-6308-0146 |
| OpenAccessLink | https://doaj.org/article/5ee8db7184d44cbd97af24b39d9038f2 |
| PageCount | 9 |
| PublicationCentury | 2000 |
| PublicationDate | 2021-01-01 |
| PublicationDateYYYYMMDD | 2021-01-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Piscataway |
| PublicationTitle | IEEE Access |
| PublicationTitleAbbrev | Access |
| PublicationYear | 2021 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 22295 |
| SubjectTerms | autoencoder; Cameras; Classification; Coders; Cooking; cross learning; Decoding; Gesture recognition; Learning; Machine learning; modality hallucination; multimodal sensors; Neural networks; Occupancy; Radar; radar applications; Radar imaging; Radar range; range-doppler; Sensors; Task analysis; thermal camera; Training; User interfaces |
| URI | https://ieeexplore.ieee.org/document/9345685; https://www.proquest.com/docview/2488746283; https://doaj.org/article/5ee8db7184d44cbd97af24b39d9038f2 |
| Volume | 9 |