Multi-Modal Cross Learning for an FMCW Radar Assisted by Thermal and RGB Cameras to Monitor Gestures and Cooking Processes


Bibliographic Details
Published in: IEEE Access, Vol. 9, pp. 22295-22303
Main Authors: Altmann, Marco, Ott, Peter, Stache, Nicolaj C., Waldschmidt, Christian
Format: Journal Article
Language:English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 2169-3536
Abstract This paper proposes a multi-modal cross learning approach that augments the neural network training phase with additional sensor data. The approach is multi-modal during training (i.e., radar Range-Doppler maps, thermal camera images, and RGB camera images are used for training). In inference, the approach is single-modal (i.e., only radar Range-Doppler maps are needed for classification). The proposed approach uses a multi-modal autoencoder training which creates a compressed data representation containing correlated features across modalities. The encoder part is then used as a pretrained network for the classification task. The benefit is that expensive sensors such as high-resolution thermal cameras are not needed in the application, yet a higher classification accuracy is achieved because of the multi-modal cross learning during training. The autoencoders can also be used to generate hallucinated data for the absent sensors; the hallucinated data can be used for user interfaces, further classification, or other tasks. The proposed approach is verified within a simultaneous cooking process classification, 2×2 cooktop occupancy detection, and gesture recognition task. The main functionality is an overboil protection and gesture control of a 2×2 cooktop. The multi-modal cross learning approach considerably outperforms single-modal approaches on this challenging classification task.
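The data flow the abstract describes (several modalities fused into one shared latent code during training, only the radar branch used at inference, and the camera decoder reused to "hallucinate" the absent sensor) can be sketched minimally. All dimensions, weight matrices, and the single-linear-layer encoders below are illustrative assumptions, not the authors' actual network:

```python
import numpy as np

# Hedged sketch of the multi-modal cross-learning idea: radar and thermal
# inputs share one latent code at training time; at inference only the
# radar encoder runs, and the thermal decoder can still hallucinate a
# thermal view from the radar-only code. Sizes are made up for the demo.
rng = np.random.default_rng(0)

D_RADAR, D_THERMAL, D_LATENT = 64, 128, 16  # assumed feature dimensions

# Modality-specific encoders/decoders (single linear maps for brevity).
W_enc_radar = rng.standard_normal((D_LATENT, D_RADAR)) * 0.1
W_enc_thermal = rng.standard_normal((D_LATENT, D_THERMAL)) * 0.1
W_dec_thermal = rng.standard_normal((D_THERMAL, D_LATENT)) * 0.1

def encode(radar, thermal=None):
    """Shared latent code; the thermal branch is optional (absent at inference)."""
    z = W_enc_radar @ radar
    if thermal is not None:
        z = z + W_enc_thermal @ thermal  # fuse modalities in latent space
    return np.tanh(z)

radar = rng.standard_normal(D_RADAR)      # stand-in Range-Doppler features
thermal = rng.standard_normal(D_THERMAL)  # stand-in thermal-image features

z_train = encode(radar, thermal)  # multi-modal during training
z_infer = encode(radar)           # single-modal (radar only) at inference

# Hallucinated thermal data: decode the radar-only latent code
# through the thermal decoder.
thermal_hallucinated = W_dec_thermal @ z_infer
print(z_infer.shape, thermal_hallucinated.shape)
```

A real implementation would train these weights with a reconstruction loss over all modalities and then reuse the radar encoder as the pretrained classifier front end; the sketch only shows the data flow that lets a radar-only network exploit cross-modal structure.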
Author Ott, Peter
Stache, Nicolaj C.
Altmann, Marco
Waldschmidt, Christian
Author_xml – sequence: 1
  givenname: Marco
  orcidid: 0000-0001-7118-209X
  surname: Altmann
  fullname: Altmann, Marco
  email: marco.altmann@hs-heilbronn.de
  organization: Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany
– sequence: 2
  givenname: Peter
  orcidid: 0000-0003-3513-4167
  surname: Ott
  fullname: Ott, Peter
  organization: Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany
– sequence: 3
  givenname: Nicolaj C.
  orcidid: 0000-0002-6308-0146
  surname: Stache
  fullname: Stache, Nicolaj C.
  organization: Institute of Automotive Engineering and Mechatronics, Heilbronn University of Applied Sciences, Heilbronn, Germany
– sequence: 4
  givenname: Christian
  orcidid: 0000-0003-2090-6136
  surname: Waldschmidt
  fullname: Waldschmidt, Christian
  organization: Institute of Microwave Engineering, Ulm University, Ulm, Germany
CODEN IAECCG
CitedBy_id 10.1109/TMTT.2022.3148403
10.1007/s12652-023-04606-9
10.1109/ACCESS.2023.3243854
10.1109/JSEN.2023.3344789
10.1109/JSEN.2024.3426030
10.1007/s10489-022-04258-w
10.1016/j.iot.2024.101456
10.1007/s13735-025-00363-x
10.1109/TIM.2023.3253906
10.1088/1742-6596/1948/1/012098
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/ACCESS.2021.3056878
Discipline Engineering
EISSN 2169-3536
EndPage 22303
ExternalDocumentID oai_doaj_org_article_5ee8db7184d44cbd97af24b39d9038f2
10_1109_ACCESS_2021_3056878
9345685
Genre orig-research
GrantInformation_xml – fundername: European Regional Development Fund (EFRE) and the Ministry for Science, Research, and Arts Baden-Württemberg within the Project ZAFH MikroSens
  funderid: 10.13039/501100008530
ISICitedReferencesCount 10
ISSN 2169-3536
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Language English
License https://creativecommons.org/licenses/by/4.0/legalcode
ORCID 0000-0003-2090-6136
0000-0003-3513-4167
0000-0001-7118-209X
0000-0002-6308-0146
OpenAccessLink https://doaj.org/article/5ee8db7184d44cbd97af24b39d9038f2
PageCount 9
PublicationCentury 2000
PublicationDate 2021-01-01
PublicationDateYYYYMMDD 2021-01-01
PublicationDecade 2020
PublicationPlace Piscataway
PublicationPlace_xml – name: Piscataway
PublicationTitle IEEE access
PublicationTitleAbbrev Access
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID doaj
proquest
crossref
ieee
SourceType Open Website
Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 22295
SubjectTerms autoencoder
Cameras
Classification
Coders
Cooking
cross learning
Decoding
Gesture recognition
Learning
Machine learning
modality hallucination
multimodal sensors
Neural networks
Occupancy
Radar
radar applications
Radar imaging
Radar range
range-doppler
Sensors
Task analysis
thermal camera
Training
User interfaces
Title Multi-Modal Cross Learning for an FMCW Radar Assisted by Thermal and RGB Cameras to Monitor Gestures and Cooking Processes
URI https://ieeexplore.ieee.org/document/9345685
https://www.proquest.com/docview/2488746283
https://doaj.org/article/5ee8db7184d44cbd97af24b39d9038f2
Volume 9