Domain Invariant and Class Discriminative Feature Learning for Visual Domain Adaptation
| Published in: | IEEE Transactions on Image Processing, Vol. 27, No. 9, pp. 4260-4273 |
|---|---|
| Main Authors: | Shuang Li, Shiji Song, Gao Huang, Zhengming Ding, Cheng Wu |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.09.2018 |
| Subjects: | Domain adaptation; subspace learning; Feature extraction; Visual discrimination; Adaptation models; Classification; Data models; Invariants; Learning systems; Measurement; Regression models; Representations; Visual tasks; Visualization |
| ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
| Online Access: | https://ieeexplore.ieee.org/document/8362753 |
| Abstract | Domain adaptation aims to build an effective classifier or regression model for unlabeled target data by utilizing well-labeled source data that follow a different distribution. Intuitively, to address the domain shift problem it is crucial to learn domain-invariant features, and most existing approaches concentrate on this goal. However, they often do not directly constrain the learned features to be class discriminative for both the source and target data, which is of vital importance for the final classification. Therefore, in this paper we put forward a novel feature learning method for domain adaptation, referred to as DICD, that constructs representations which are both Domain Invariant and Class Discriminative. Specifically, DICD learns a latent feature space that preserves important data properties: it reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and it simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter. Our experiments demonstrate that these class-discriminative properties dramatically alleviate the cross-domain distribution inconsistency, which further boosts classification performance. Moreover, we show that enforcing both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, whose optimal solution can be derived efficiently by solving a generalized eigendecomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD significantly outperforms competing methods. |
|---|---|
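
For intuition, the following is a minimal, illustrative sketch of the kind of optimization the abstract describes: joint marginal and class-conditional distribution matching (in the common MMD/JDA style) plus a simple intra-class scatter penalty, reduced to a single generalized eigendecomposition. The matrix constructions and the name `dicd_like_projection` are illustrative assumptions, not the paper's exact DICD objective or the authors' code.

```python
import numpy as np
import scipy.linalg


def dicd_like_projection(Xs, ys, Xt, yt_pseudo, dim=30, beta=1.0, reg=1e-3):
    """Learn a linear projection (d x dim) in the spirit of DICD.

    Xs: (ns, d) labeled source samples; ys: (ns,) source labels.
    Xt: (nt, d) target samples; yt_pseudo: (nt,) target pseudo-labels.
    """
    X = np.vstack([Xs, Xt]).T            # d x n, columns are samples
    d, n = X.shape
    ns, nt = len(Xs), len(Xt)

    # Marginal MMD matrix: penalizes the distance between domain means.
    e = np.vstack([np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt])
    M = e @ e.T

    # Class-conditional MMD matrices: match per-class means across domains,
    # using pseudo-labels on the unlabeled target.
    for c in np.unique(ys):
        ec = np.zeros((n, 1))
        src = np.where(ys == c)[0]
        tgt = ns + np.where(yt_pseudo == c)[0]
        if len(src) > 0 and len(tgt) > 0:
            ec[src] = 1.0 / len(src)
            ec[tgt] = -1.0 / len(tgt)
            M += ec @ ec.T

    # Intra-class scatter of the source data: a simplified stand-in for the
    # paper's class-discriminative terms (pulls same-class samples together).
    Sw = np.zeros((d, d))
    for c in np.unique(ys):
        Xc = Xs[ys == c].T               # d x nc
        mu = Xc.mean(axis=1, keepdims=True)
        Sw += (Xc - mu) @ (Xc - mu).T

    # Centering matrix: constraining W^T X H X^T W = I keeps total scatter
    # (and hence inter-class dispersion) from collapsing to zero.
    H = np.eye(n) - np.ones((n, n)) / n

    # Generalized eigenproblem A w = lambda B w: the eigenvectors with the
    # smallest eigenvalues minimize distribution mismatch plus intra-class
    # scatter per unit of total variance.
    A = X @ M @ X.T + beta * Sw + reg * np.eye(d)
    B = X @ H @ X.T + reg * np.eye(d)
    _, vecs = scipy.linalg.eigh(A, B)    # eigenvalues in ascending order
    return vecs[:, :dim]                 # d x dim projection matrix
```

In practice, methods of this family obtain the target pseudo-labels from a classifier trained on the projected source data, then iterate the projection and pseudo-labeling steps until the labels stabilize.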
| CODEN | IIPRE4 |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018 |
| DOI | 10.1109/TIP.2018.2839528 |
| Funding | National Natural Science Foundation of China (Grants 41427806 and 61273233); National Key Research and Development Program of China (Grant 2016YFB1200203) |
| Cited By (Web of Science) | 221 |
| IsPeerReviewed | true |
| IsScholarly | true |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
| ORCID | Shuang Li: 0000-0003-1910-7812; Zhengming Ding: 0000-0002-6994-5278 |
| PMID | 29870346 |