Enhancing transfer performance across datasets for brain-computer interfaces using a combination of alignment strategies and adaptive batch normalization


Detailed bibliography
Published in: Journal of Neural Engineering, Volume 18, Issue 4
Main authors: Xu, Lichao; Xu, Minpeng; Ma, Zhen; Wang, Kun; Jung, Tzyy-Ping; Ming, Dong
Format: Journal Article
Language: English
Published: 1 August 2021
ISSN: 1741-2552
Abstract

Objective. Recently, transfer learning (TL) and deep learning (DL) have been introduced to address intra- and inter-subject variability in brain-computer interfaces (BCIs). However, current TL and DL algorithms are usually validated within a single dataset, under the assumption that data from the test subjects are acquired under the same conditions as the data from the training (source) subjects. This assumption is generally violated in practice because acquisition systems and experimental settings differ across studies and datasets. The generalization ability of these algorithms therefore needs further validation in a cross-dataset scenario, which is closer to the actual situation. This study compared the transfer performance of pre-trained deep-learning models with different preprocessing strategies in a cross-dataset scenario.

Approach. This study used four publicly available motor imagery datasets; each was selected in turn as the source dataset, with the remaining three used as target datasets. EEGNet and ShallowConvNet were trained on the source dataset with each of four preprocessing strategies: channel normalization, trial normalization, Euclidean alignment, and Riemannian alignment. The transfer performance of the pre-trained models was then validated on the target datasets. Adaptive batch normalization (AdaBN) was also applied to reduce internal covariate shift across datasets. The transfer performance of the four preprocessing strategies was compared with that of a baseline approach based on manifold embedded knowledge transfer (MEKT), and the feasibility and performance of fusing MEKT and EEGNet were also explored.
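Note: the abstract names the two alignment strategies without detail. The sketch below is a minimal NumPy illustration of Euclidean alignment under its common formulation (whitening every trial by the inverse square root of the arithmetic-mean spatial covariance); it is not the authors' code, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Whiten EEG trials (n_trials, n_channels, n_samples) so that
    their arithmetic-mean spatial covariance becomes the identity."""
    # Reference matrix: arithmetic mean of per-trial covariances
    ref = np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    # Alignment transform ref^(-1/2), applied to every trial
    ref_inv_sqrt = fractional_matrix_power(ref, -0.5)
    return np.stack([ref_inv_sqrt @ x for x in trials])
```

Riemannian alignment follows the same recipe, but with the arithmetic mean replaced by the Riemannian geometric mean of the trial covariances (available, for example, as mean_riemann in the pyriemann package).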
Main results. DL models with alignment strategies achieved significantly better transfer performance than models with either of the two normalization strategies. As an unsupervised domain adaptation method, AdaBN also significantly improved the transfer performance of the DL models. DL models combining AdaBN with an alignment strategy significantly outperformed MEKT. Moreover, the generalizability of EEGNet models combining AdaBN and alignment strategies could be further improved via the domain adaptation step in MEKT, achieving the best generalization ability across the datasets (BNCI2014001: 0.788, PhysionetMI: 0.679, Weibo2014: 0.753, Cho2017: 0.650).

Significance. Combining alignment strategies with AdaBN can easily improve the generalizability of DL models without fine-tuning. This study may provide new insights into the design of transfer neural networks for BCIs by separating source and target batch normalization layers in the domain adaptation process.
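Note: AdaBN, the central domain-adaptation step above, amounts to re-estimating batch-normalization statistics on unlabeled target data while keeping every learned weight frozen. A minimal PyTorch sketch follows, assuming the model uses standard torch.nn batch-normalization layers; the helper name and loop are illustrative, not the paper's implementation.

```python
import torch

@torch.no_grad()
def adapt_batchnorm(model, target_loader, device="cpu"):
    """AdaBN: re-estimate BatchNorm running statistics on unlabeled
    target-domain batches; learned affine weights stay unchanged."""
    model.to(device)
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # forget source-domain statistics
            m.momentum = None         # use a cumulative moving average
    model.train()  # BN layers only update running stats in train mode
    for x, _ in target_loader:
        model(x.to(device))
    model.eval()   # inference now uses target-domain statistics
    return model
```

This corresponds to the abstract's point about separating source and target batch normalization layers: the same learned weights serve both domains, and only the normalization statistics are domain-specific.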
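Note: the fusion of MEKT and EEGNet is not specified in detail here. As a loose, hypothetical illustration of the general pattern only, the sketch below extracts features from a frozen pre-trained network and hands them to a feature-level domain-adaptation routine; the hook-based extractor, the penultimate-layer attribute, and the domain_adapt placeholder are all assumptions, and MEKT's actual steps (Riemannian centroid alignment and tangent-space features) are not reproduced.

```python
import torch

@torch.no_grad()
def deep_features(model, loader, layer, device="cpu"):
    """Collect activations of `layer` (e.g. EEGNet's penultimate
    block) for every batch in `loader`."""
    feats = []
    handle = layer.register_forward_hook(
        lambda _m, _inp, out: feats.append(out.flatten(1).cpu()))
    model.to(device).eval()
    for x, _ in loader:
        model(x.to(device))
    handle.remove()
    return torch.cat(feats)

# Hypothetical fusion pipeline: deep features from the frozen,
# AdaBN-adapted network feed a feature-level domain-adaptation step
# (MEKT in the paper; `domain_adapt` is a placeholder here).
# Xs = deep_features(model, source_loader, model.penultimate)
# Xt = deep_features(model, target_loader, model.penultimate)
# clf = domain_adapt(Xs, ys, Xt)  # e.g. MEKT's projection + classifier
```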
Copyright: 2021 IOP Publishing Ltd.
DOI: 10.1088/1741-2552/ac1ed2