Maximum Density Divergence for Domain Adaptation

Published in: IEEE transactions on pattern analysis and machine intelligence, Volume 43, Issue 11, pp. 3918-3930
Main authors: Li, Jingjing; Chen, Erpeng; Ding, Zhengming; Zhu, Lei; Lu, Ke; Shen, Heng Tao
Medium: Journal Article
Language: English
Published: United States, IEEE, 01.11.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0162-8828; EISSN: 1939-3539, 2160-9292
Abstract Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain where the two domains have distinctive data distributions. Thus, the essence of domain adaptation is to mitigate the distribution divergence between the two domains. The state-of-the-art methods practice this very idea by either conducting adversarial training or minimizing a metric which defines the distribution gaps. In this paper, we propose a new domain adaptation method named adversarial tight match (ATM) which enjoys the benefits of both adversarial training and metric learning. Specifically, at first, we propose a novel distance loss, named maximum density divergence (MDD), to quantify the distribution divergence. MDD minimizes the inter-domain divergence ("match" in ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge issue in adversarial domain adaptation, we consider leveraging the proposed MDD into adversarial domain adaptation framework. At last, we tailor the proposed MDD as a practical learning loss and report our ATM. Both empirical evaluation and theoretical analysis are reported to verify the effectiveness of the proposed method. The experimental results on four benchmarks, both classical and large-scale, show that our method is able to achieve new state-of-the-art performance on most evaluations.
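The abstract describes MDD only at a high level. As a rough illustration of the idea, and not the exact loss published in the paper, the sketch below implements a simplified MDD-style objective in PyTorch: an inter-domain term that pulls source and target batch features together ("match") and an intra-class term that shrinks same-class distances within each domain using source labels and target pseudo-labels ("tight"). All function and variable names are hypothetical, and the pairing scheme and weighting are assumptions made for illustration.

import torch

def mdd_style_loss(src_feat, tgt_feat, src_labels, tgt_pseudo_labels):
    # Inter-domain "match" term: mean squared distance between all
    # source/target feature pairs in the batch (to be minimized).
    inter = torch.cdist(src_feat, tgt_feat, p=2).pow(2).mean()

    def intra_class(feat, labels):
        # Mean squared distance over pairs sharing the same (pseudo-)label;
        # minimizing it increases the density of each class cluster ("tight").
        d2 = torch.cdist(feat, feat, p=2).pow(2)
        same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
        same.fill_diagonal_(0.0)  # ignore self-pairs
        return (d2 * same).sum() / same.sum().clamp(min=1.0)

    tight = intra_class(src_feat, src_labels) + intra_class(tgt_feat, tgt_pseudo_labels)
    return inter + tight

# Usage with random stand-in features (64 samples, 256-dim, 10 classes):
src = torch.randn(64, 256)
tgt = torch.randn(64, 256)
y_src = torch.randint(0, 10, (64,))
y_tgt = torch.randint(0, 10, (64,))  # would be pseudo-labels in practice
loss = mdd_style_loss(src, tgt, y_src, y_tgt)

In the full ATM framework this kind of term would be combined with the classification loss and the adversarial objective, with a balancing weight chosen as a hyperparameter; those details are not reproduced here.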
Author Chen, Erpeng
Zhu, Lei
Lu, Ke
Ding, Zhengming
Li, Jingjing
Shen, Heng Tao
Author_xml – sequence: 1
  givenname: Jingjing
  orcidid: 0000-0002-5504-2529
  surname: Li
  fullname: Li, Jingjing
  email: lijin117@yeah.net
  organization: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 2
  givenname: Erpeng
  surname: Chen
  fullname: Chen, Erpeng
  email: cep1126@163.com
  organization: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 3
  givenname: Zhengming
  surname: Ding
  fullname: Ding, Zhengming
  email: zd2@iu.edu
  organization: Department of Computer, Information and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
– sequence: 4
  givenname: Lei
  orcidid: 0000-0002-2993-7142
  surname: Zhu
  fullname: Zhu, Lei
  email: leizhu0608@gmail.com
  organization: Shandong Normal University, Jinan, China
– sequence: 5
  givenname: Ke
  orcidid: 0000-0002-3456-4993
  surname: Lu
  fullname: Lu, Ke
  email: kel@uestc.edu.cn
  organization: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 6
  givenname: Heng Tao
  orcidid: 0000-0002-2999-2088
  surname: Shen
  fullname: Shen, Heng Tao
  email: shenhengtao@hotmail.com
  organization: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/32356736 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1016_j_patcog_2024_110653
crossref_primary_10_32604_cmc_2024_049484
crossref_primary_10_1109_TPAMI_2025_3541207
crossref_primary_10_1109_JAS_2023_123342
crossref_primary_10_1016_j_patcog_2023_109787
crossref_primary_10_1109_TMM_2023_3251094
crossref_primary_10_1007_s10845_023_02232_y
crossref_primary_10_1109_TMM_2023_3272742
crossref_primary_10_1007_s00530_024_01444_3
crossref_primary_10_1016_j_neunet_2020_06_016
crossref_primary_10_1109_JIOT_2024_3466924
crossref_primary_10_1109_TIP_2023_3261758
crossref_primary_10_1109_TKDE_2021_3060473
crossref_primary_10_1016_j_eswa_2022_117978
crossref_primary_10_1109_TCSVT_2022_3230963
crossref_primary_10_1109_TPAMI_2021_3071225
crossref_primary_10_1117_1_JEI_33_5_053015
crossref_primary_10_1016_j_media_2024_103103
crossref_primary_10_1109_TGRS_2025_3531424
crossref_primary_10_1016_j_ins_2023_119240
crossref_primary_10_1049_2024_1561351
crossref_primary_10_1007_s10115_023_02043_w
crossref_primary_10_1109_TIP_2022_3216781
crossref_primary_10_1109_TNNLS_2022_3199619
crossref_primary_10_1016_j_neucom_2023_126624
crossref_primary_10_1109_TCSVT_2023_3242614
crossref_primary_10_1109_TCSVT_2022_3232112
crossref_primary_10_1109_TTE_2021_3109636
crossref_primary_10_1007_s10462_024_10858_4
crossref_primary_10_1088_1742_6596_2278_1_012032
crossref_primary_10_1007_s13042_024_02165_9
crossref_primary_10_1109_TCYB_2021_3104612
crossref_primary_10_1109_TMI_2022_3193146
crossref_primary_10_1109_TNSE_2023_3304986
crossref_primary_10_1016_j_patcog_2022_108987
crossref_primary_10_1145_3565368
crossref_primary_10_1109_TIP_2022_3140614
crossref_primary_10_1016_j_imavis_2023_104755
crossref_primary_10_1088_1741_2552_ad09ff
crossref_primary_10_1109_TGRS_2024_3480091
crossref_primary_10_1109_TCSVT_2024_3440517
crossref_primary_10_1109_TMM_2023_3245420
crossref_primary_10_1109_TPEL_2022_3220760
crossref_primary_10_1109_TPAMI_2021_3128560
crossref_primary_10_1007_s11432_022_3851_1
crossref_primary_10_1109_TMI_2025_3525902
crossref_primary_10_1109_TPAMI_2023_3238727
crossref_primary_10_1007_s13042_022_01608_5
crossref_primary_10_1109_ACCESS_2021_3052511
crossref_primary_10_1109_TCYB_2022_3163432
crossref_primary_10_1109_TPAMI_2025_3557502
crossref_primary_10_1109_LSP_2020_3025061
crossref_primary_10_1109_ACCESS_2021_3087867
crossref_primary_10_1016_j_eswa_2025_126744
crossref_primary_10_1016_j_bspc_2024_107483
crossref_primary_10_1109_TCCN_2024_3485083
crossref_primary_10_1016_j_patcog_2024_110473
crossref_primary_10_1016_j_neucom_2025_131498
crossref_primary_10_1109_TMM_2021_3134168
crossref_primary_10_1007_s00530_024_01314_y
crossref_primary_10_1109_TIM_2022_3200667
crossref_primary_10_1109_TCYB_2021_3071451
crossref_primary_10_1109_TPAMI_2024_3507534
crossref_primary_10_1145_3632524
crossref_primary_10_1016_j_compbiomed_2024_107998
crossref_primary_10_1109_TMM_2022_3145237
crossref_primary_10_1016_j_inffus_2023_101912
crossref_primary_10_1145_3729483
crossref_primary_10_1109_TNNLS_2021_3070085
crossref_primary_10_3390_rs16111983
crossref_primary_10_1109_TIP_2024_3353539
crossref_primary_10_1109_TNNLS_2021_3134673
crossref_primary_10_1109_TIM_2025_3580817
crossref_primary_10_1109_TMM_2022_3148592
crossref_primary_10_1109_TIM_2025_3557120
crossref_primary_10_1109_TIM_2023_3309381
crossref_primary_10_1007_s10489_022_03306_9
crossref_primary_10_1016_j_compind_2021_103572
crossref_primary_10_1016_j_knosys_2022_110205
crossref_primary_10_1109_JIOT_2025_3573713
crossref_primary_10_1016_j_patcog_2024_110409
crossref_primary_10_1109_ACCESS_2020_3007147
crossref_primary_10_1016_j_apenergy_2025_126691
crossref_primary_10_1109_TIP_2022_3152052
crossref_primary_10_1109_TNNLS_2021_3073119
crossref_primary_10_1007_s41060_021_00274_0
crossref_primary_10_1016_j_neunet_2020_11_015
crossref_primary_10_1109_TAI_2022_3196813
crossref_primary_10_1109_TMM_2023_3312917
crossref_primary_10_1007_s10462_022_10230_4
crossref_primary_10_1109_TNNLS_2021_3105868
crossref_primary_10_1016_j_imavis_2023_104695
crossref_primary_10_1109_TCYB_2021_3052536
crossref_primary_10_1109_TPAMI_2024_3438154
crossref_primary_10_3390_e24010044
crossref_primary_10_1007_s10994_025_06751_y
crossref_primary_10_1007_s11517_025_03287_0
crossref_primary_10_1016_j_ipm_2024_103730
crossref_primary_10_1109_ACCESS_2020_3015334
crossref_primary_10_1109_TGRS_2024_3412401
crossref_primary_10_1109_TMM_2021_3063616
crossref_primary_10_1109_TPAMI_2021_3109287
crossref_primary_10_1109_TPAMI_2024_3370978
crossref_primary_10_1016_j_eswa_2024_124509
crossref_primary_10_1007_s11263_025_02497_1
crossref_primary_10_1016_j_knosys_2025_113850
crossref_primary_10_1007_s00521_023_08269_7
crossref_primary_10_1109_TIM_2025_3527531
crossref_primary_10_1109_TNNLS_2021_3071474
crossref_primary_10_1016_j_engappai_2024_109324
crossref_primary_10_1109_TNNLS_2022_3177769
crossref_primary_10_1109_TSMC_2022_3195239
crossref_primary_10_1016_j_ipm_2020_102367
crossref_primary_10_1109_TIM_2022_3217563
crossref_primary_10_1109_TNNLS_2021_3072041
crossref_primary_10_1109_TKDE_2025_3592614
crossref_primary_10_1109_TGRS_2025_3559915
crossref_primary_10_1109_TIP_2025_3533199
crossref_primary_10_1016_j_inffus_2023_102020
crossref_primary_10_1016_j_media_2022_102607
crossref_primary_10_3390_s25092818
crossref_primary_10_1109_TFUZZ_2024_3367460
crossref_primary_10_1016_j_measurement_2023_112937
crossref_primary_10_1109_TTE_2023_3293551
crossref_primary_10_1007_s11042_023_15683_5
crossref_primary_10_1109_TMM_2022_3233398
crossref_primary_10_1109_TMM_2022_3233306
crossref_primary_10_1007_s10994_023_06432_8
crossref_primary_10_1109_ACCESS_2021_3098713
crossref_primary_10_1109_TITS_2023_3314680
crossref_primary_10_1007_s11263_024_02174_9
crossref_primary_10_1016_j_neunet_2024_106626
crossref_primary_10_1109_TPAMI_2024_3412680
crossref_primary_10_1109_TCYB_2021_3110128
crossref_primary_10_1109_TNSE_2022_3201529
crossref_primary_10_1016_j_engappai_2025_112330
crossref_primary_10_1109_TNNLS_2022_3193289
crossref_primary_10_1016_j_ins_2022_01_044
crossref_primary_10_1109_TGRS_2023_3320805
crossref_primary_10_1109_TCSVT_2023_3249200
crossref_primary_10_1109_TCYB_2021_3054978
crossref_primary_10_1109_TPAMI_2025_3572963
crossref_primary_10_1016_j_patcog_2024_111270
crossref_primary_10_1109_ACCESS_2023_3305924
crossref_primary_10_1016_j_neunet_2023_08_005
crossref_primary_10_1109_TSMC_2023_3324735
crossref_primary_10_3390_math10183223
crossref_primary_10_1109_TIP_2023_3318955
crossref_primary_10_1016_j_ins_2022_07_083
crossref_primary_10_1109_ACCESS_2021_3123281
crossref_primary_10_1109_TGRS_2024_3367922
crossref_primary_10_1109_ACCESS_2020_3027850
crossref_primary_10_1109_TMM_2021_3087098
crossref_primary_10_1109_TIP_2025_3541868
crossref_primary_10_1109_TIFS_2025_3569409
crossref_primary_10_1109_ACCESS_2021_3076533
crossref_primary_10_1016_j_compbiomed_2023_106570
crossref_primary_10_1016_j_patcog_2025_112095
crossref_primary_10_1016_j_ins_2024_121359
crossref_primary_10_1016_j_patrec_2022_06_015
crossref_primary_10_1109_TIFS_2025_3576576
crossref_primary_10_1109_TPEL_2023_3251568
crossref_primary_10_1109_TIM_2025_3542138
crossref_primary_10_1109_TII_2020_3032690
crossref_primary_10_1109_TCYB_2021_3133890
crossref_primary_10_1016_j_aei_2024_102948
crossref_primary_10_1109_ACCESS_2023_3337438
crossref_primary_10_1016_j_patcog_2022_108960
crossref_primary_10_1109_TMM_2024_3407697
crossref_primary_10_1109_ACCESS_2022_3218337
crossref_primary_10_1109_TMM_2022_3141886
crossref_primary_10_1016_j_knosys_2025_114316
crossref_primary_10_1088_1741_2552_ad9957
crossref_primary_10_1109_TIP_2025_3526380
crossref_primary_10_1016_j_knosys_2023_110397
crossref_primary_10_1109_JIOT_2024_3490037
crossref_primary_10_3390_app15052625
crossref_primary_10_1109_TPAMI_2024_3444904
crossref_primary_10_1109_TPAMI_2022_3190645
Cites_doi 10.24963/ijcai.2018/767
10.1109/TNNLS.2018.2868854
10.1109/CVPR.2016.90
10.1109/CVPR.2017.572
10.1109/CVPR.2018.00845
10.1109/TNN.2010.2091281
10.1109/TIP.2018.2851067
10.1145/3343031.3350902
10.1007/s11263-015-0816-y
10.1007/978-3-319-46493-0_36
10.1109/CVPR.2017.316
10.1007/978-3-319-49409-8_35
10.1007/978-3-642-15561-1_16
10.1090/mbk/107
10.1109/TCYB.2020.2974106
10.1007/s11263-014-0696-6
10.1109/TCYB.2018.2820174
10.1109/TPAMI.2019.2945942
10.1007/s10994-009-5152-4
10.1109/CVPR.2018.00887
10.1109/TKDE.2009.191
10.1109/TIP.2019.2924174
10.1109/TPAMI.2020.2964173
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
7X8
DOI 10.1109/TPAMI.2020.2991050
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitleList Technology Research Database
PubMed

MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Xplore
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 3930
ExternalDocumentID 32356736
10_1109_TPAMI_2020_2991050
9080115
Genre orig-research
Journal Article
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 61806039; 61832001
  funderid: 10.13039/501100001809
– fundername: National Key Research and Development Program of China
  grantid: 2018YFE0203900
  funderid: 10.13039/501100012166
– fundername: Sichuan Department of Science and Technology
  grantid: 20ZDYF2771
IEDL.DBID RIE
ISICitedReferencesCount 204
ISSN 0162-8828
1939-3539
IngestDate Mon Sep 29 05:46:50 EDT 2025
Sun Nov 09 06:08:59 EST 2025
Wed Feb 19 02:30:42 EST 2025
Sat Nov 29 05:15:59 EST 2025
Tue Nov 18 22:13:16 EST 2025
Wed Aug 27 02:26:59 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 11
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0002-3456-4993
0000-0002-5504-2529
0000-0002-2999-2088
0000-0002-2993-7142
PMID 32356736
PQID 2578236232
PQPubID 85458
PageCount 13
ParticipantIDs proquest_miscellaneous_2397678861
ieee_primary_9080115
proquest_journals_2578236232
pubmed_primary_32356736
crossref_citationtrail_10_1109_TPAMI_2020_2991050
crossref_primary_10_1109_TPAMI_2020_2991050
PublicationCentury 2000
PublicationDate 2021-11-01
PublicationDateYYYYMMDD 2021-11-01
PublicationDate_xml – month: 11
  year: 2021
  text: 2021-11-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref35
ref12
ref37
ref15
ref36
gretton (ref17) 2012; 13
ref33
ref11
ref10
netzer (ref34) 2011
ref2
ref1
ref38
ref18
van der maaten (ref44) 2008; 9
goodfellow (ref13) 2014
liu (ref19) 2016
ganin (ref20) 2016; 17
tzeng (ref41) 2014
ref23
ref26
long (ref4) 2018
hoffman (ref3) 2018
ref42
ref22
ref21
xie (ref32) 2018
krizhevsky (ref43) 2012
arora (ref14) 2017
ref28
zhao (ref31) 2019
long (ref16) 2017
ref8
ref7
mirza (ref29) 2014
shu (ref30) 2018
ref9
french (ref39) 2018
ref6
székely (ref27) 2003; 3
ref40
zhuang (ref25) 2015
ganin (ref5) 2015
long (ref24) 2015
References_xml – volume: 3
  start-page: 1
  year: 2003
  ident: ref27
  article-title: E-statistics: The energy of statistical samples
  publication-title: Bowling Green State Univ Dept Math Statist Tech Rep
– start-page: 1994
  year: 2018
  ident: ref3
  article-title: CyCADA: Cycle-consistent adversarial domain adaptation
  publication-title: Proc Int Conf Mach Learn
– ident: ref12
  doi: 10.24963/ijcai.2018/767
– ident: ref21
  doi: 10.1109/TNNLS.2018.2868854
– ident: ref36
  doi: 10.1109/CVPR.2016.90
– ident: ref11
  doi: 10.1109/CVPR.2017.572
– ident: ref37
  doi: 10.1109/CVPR.2018.00845
– ident: ref40
  doi: 10.1109/TNN.2010.2091281
– year: 2018
  ident: ref39
  article-title: Self-ensembling for visual domain adaptation
  publication-title: Proc Int Conf Learn Representations
– year: 2014
  ident: ref29
  article-title: Conditional generative adversarial nets
– ident: ref22
  doi: 10.1109/TIP.2018.2851067
– year: 2014
  ident: ref41
  article-title: Deep domain confusion: Maximizing for domain invariance
– start-page: 2672
  year: 2014
  ident: ref13
  article-title: Generative adversarial nets
  publication-title: Proc 27th Int Conf Neural Inf Process Syst
– start-page: 224
  year: 2017
  ident: ref14
  article-title: Generalization and equilibrium in generative adversarial nets (GANs)
  publication-title: Proc Int Conf Mach Learn
– ident: ref28
  doi: 10.1145/3343031.3350902
– year: 2011
  ident: ref34
  article-title: Reading digits in natural images with unsupervised feature learning
  publication-title: Proc NeurIPS Workshop Deep Learn Unsupervised Feature Learn
– start-page: 1180
  year: 2015
  ident: ref5
  article-title: Unsupervised domain adaptation by backpropagation
– ident: ref1
  doi: 10.1007/s11263-015-0816-y
– volume: 17
  start-page: 2096
  year: 2016
  ident: ref20
  article-title: Domain-adversarial training of neural networks
  publication-title: J Mach Learn Res
– volume: 13
  start-page: 723
  year: 2012
  ident: ref17
  article-title: A kernel two-sample test
  publication-title: J Mach Learn Res
– ident: ref38
  doi: 10.1007/978-3-319-46493-0_36
– ident: ref7
  doi: 10.1109/CVPR.2017.316
– ident: ref15
  doi: 10.1007/978-3-319-49409-8_35
– start-page: 469
  year: 2016
  ident: ref19
  article-title: Coupled generative adversarial networks
  publication-title: Proc Int Conf Neural Inf Process
– start-page: 4119
  year: 2015
  ident: ref25
  article-title: Supervised representation learning: Transfer learning with deep autoencoders
  publication-title: Proc Int Joint Conf Artif Intell
– ident: ref35
  doi: 10.1007/978-3-642-15561-1_16
– ident: ref33
  doi: 10.1090/mbk/107
– start-page: 1647
  year: 2018
  ident: ref4
  article-title: Conditional adversarial domain adaptation
  publication-title: Proc 32nd Int Conf Neural Inf Process Syst
– ident: ref23
  doi: 10.1109/TCYB.2020.2974106
– start-page: 2208
  year: 2017
  ident: ref16
  article-title: Deep transfer learning with joint adaptation networks
  publication-title: Proc Int Conf Mach Learn
– start-page: 5419
  year: 2018
  ident: ref32
  article-title: Learning semantic representations for unsupervised domain adaptation
  publication-title: Proc Int Conf Mach Learn
– volume: 9
  start-page: 2579
  year: 2008
  ident: ref44
  article-title: Visualizing data using t-SNE
  publication-title: J Mach Learn Res
– ident: ref6
  doi: 10.1007/s11263-014-0696-6
– ident: ref10
  doi: 10.1109/TCYB.2018.2820174
– start-page: 97
  year: 2015
  ident: ref24
  article-title: Learning transferable features with deep adaptation networks
  publication-title: Proc Int Conf Mach Learn
– ident: ref9
  doi: 10.1109/TPAMI.2019.2945942
– ident: ref18
  doi: 10.1007/s10994-009-5152-4
– ident: ref42
  doi: 10.1109/CVPR.2018.00887
– ident: ref2
  doi: 10.1109/TKDE.2009.191
– year: 2018
  ident: ref30
  article-title: A DIRT-T approach to unsupervised domain adaptation
– ident: ref26
  doi: 10.1109/TIP.2019.2924174
– ident: ref8
  doi: 10.1109/TPAMI.2020.2964173
– start-page: 7523
  year: 2019
  ident: ref31
  article-title: On learning invariant representations for domain adaptation
  publication-title: Proc Int Conf Mach Learn
– start-page: 1097
  year: 2012
  ident: ref43
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc Int Conf Neural Inf Process
SSID ssj0014503
Score 2.7309418
Snippet Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain where the two...
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 3918
SubjectTerms Adaptation
Adaptation models
adversarial learning
Benchmark testing
Density
Domain adaptation
Domains
Empirical analysis
Games
Kernel
Learning
Measurement
Performance evaluation
Task analysis
Training
transfer learning
Title Maximum Density Divergence for Domain Adaptation
URI https://ieeexplore.ieee.org/document/9080115
https://www.ncbi.nlm.nih.gov/pubmed/32356736
https://www.proquest.com/docview/2578236232
https://www.proquest.com/docview/2397678861
Volume 43
WOSCitedRecordID wos000702649700017
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Xplore
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE