Domain Space Transfer Extreme Learning Machine for Domain Adaptation
Extreme learning machine (ELM) has been applied to a wide range of classification and regression problems due to its high accuracy and efficiency. However, ELM can only deal with cases where the training and testing data come from the same distribution, while in real-world situations this assumption is often violated.
| Published in: | IEEE Transactions on Cybernetics, Vol. 49, No. 5, pp. 1909–1922 |
|---|---|
| Main Authors: | Chen, Yiming; Song, Shiji; Li, Shuang; Yang, Le; Wu, Cheng |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.05.2019 |
| Subjects: | Domain adaptation; extreme learning machine (ELM); maximum mean discrepancy (MMD); space learning |
| ISSN: | 2168-2267, 2168-2275 |
| Online Access: | https://doi.org/10.1109/TCYB.2018.2816981 |
| Abstract | Extreme learning machine (ELM) has been applied to a wide range of classification and regression problems due to its high accuracy and efficiency. However, ELM can only deal with cases where the training and testing data come from the same distribution, while in real-world situations this assumption is often violated. As a result, ELM performs poorly on domain adaptation problems, in which the training data (source domain) and testing data (target domain) are differently distributed but somehow related. In this paper, an ELM-based space learning algorithm, domain space transfer ELM (DST-ELM), is developed to deal with unsupervised domain adaptation problems. Specifically, DST-ELM reconstructs the source and target data in a domain-invariant space without using any target labels, achieving two goals simultaneously. First, the target data are fed into an ELM-based feature space learning network whose output is required to approximate its input, so that the structural knowledge and intrinsic discriminative information of the target domain are preserved as much as possible. Second, the source data are projected into the same space as the target data, and the distribution distance between the two domains is minimized in that space. This unsupervised feature transformation network is followed by an adaptive ELM classifier, which is trained on the transferred labeled source samples and used to predict target labels. Moreover, the ELMs in the proposed method, including both the space-learning ELM and the classifier, require only a small number of hidden nodes, keeping the computational complexity low. Extensive experiments on real-world image and text datasets verify that our approach outperforms several existing domain adaptation methods in accuracy while maintaining high efficiency. |
|---|---|
| Author | Chen, Yiming; Song, Shiji; Yang, Le; Wu, Cheng; Li, Shuang |
| CODEN | ITCEB8 |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
| DOI | 10.1109/TCYB.2018.2816981 |
| Discipline | Sciences (General) |
| EISSN | 2168-2275 |
| EndPage | 1922 |
| Genre | orig-research Journal Article |
| GrantInformation | National Natural Science Foundation of China (Grants 41427806 and 61273233; funder ID 10.13039/501100001809); National Key Research and Development Program (Grant 2016YFB1200203) |
| ISSN | 2168-2267 2168-2275 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 5 |
| Language | English |
| ORCID | 0000-0002-8894-2902 0000-0001-8379-4915 0000-0003-1910-7812 0000-0001-7361-9283 0000-0002-8611-2665 |
| PMID | 29993853 |
| PageCount | 14 |
| PublicationDate | 2019-05-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on cybernetics |
| PublicationTitleAbbrev | TCYB |
| PublicationTitleAlternate | IEEE Trans Cybern |
| PublicationYear | 2019 |
| Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1909 |
| SubjectTerms | Adaptation Adaptation models Algorithms Classifiers Distributed databases Domain adaptation Domains extreme learning machine (ELM) Image reconstruction Machine learning maximum mean discrepancy (MMD) Neural networks space learning Task analysis Testing Training |
| Title | Domain Space Transfer Extreme Learning Machine for Domain Adaptation |
| URI | https://ieeexplore.ieee.org/document/8334809 https://www.ncbi.nlm.nih.gov/pubmed/29993853 https://www.proquest.com/docview/2188600980 https://www.proquest.com/docview/2068342505 |
| Volume | 49 |
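The abstract above outlines the DST-ELM pipeline: an ELM-based network reconstructs the target data in a transfer space, a distribution-distance term pulls the projected source toward the projected target, and an adaptive ELM classifier trained on the transferred source samples then labels the target data. The Python sketch below is a minimal, hedged illustration of that idea on toy data, not the authors' implementation; the sigmoid hidden layer, the linear mean-matching surrogate for MMD, the closed-form ridge solutions, the regularization constants, and all variable names are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random ELM hidden layer: sigmoid(X @ W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def linear_mmd(A, B):
    """Squared distance between feature means (linear-kernel MMD surrogate)."""
    return float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

# Toy source/target data with a simulated covariate shift (illustrative only).
d, L, Lc = 20, 50, 80
Xs = rng.normal(0.0, 1.0, (200, d)); ys = (Xs[:, 0] > 0.0).astype(int)
Xt = rng.normal(0.5, 1.2, (150, d)); yt = (Xt[:, 0] > 0.5).astype(int)  # labels used only for evaluation

# --- Space-learning ELM (unsupervised) ---------------------------------------
# Shared random hidden layer; beta maps hidden features into the transfer space.
W, b = rng.normal(size=(d, L)), rng.normal(size=L)
Hs, Ht = elm_hidden(Xs, W, b), elm_hidden(Xt, W, b)

# Simplified objective: ||Ht @ beta - Xt||^2            (preserve target structure)
#                     + lam   * ||beta||^2              (regularization)
#                     + gamma * ||(mean(Hs) - mean(Ht)) @ beta||^2   (mean matching)
lam, gamma = 1.0, 10.0
diff = Hs.mean(axis=0, keepdims=True) - Ht.mean(axis=0, keepdims=True)  # 1 x L
beta = np.linalg.solve(Ht.T @ Ht + lam * np.eye(L) + gamma * diff.T @ diff,
                       Ht.T @ Xt)                                        # L x d

Zs, Zt = Hs @ beta, Ht @ beta  # both domains projected into the transfer space

# --- Adaptive ELM classifier trained on transferred source samples -----------
Wc, bc = rng.normal(size=(d, Lc)), rng.normal(size=Lc)
Hc_s, Hc_t = elm_hidden(Zs, Wc, bc), elm_hidden(Zt, Wc, bc)
C = 1.0
beta_c = np.linalg.solve(Hc_s.T @ Hc_s + C * np.eye(Lc), Hc_s.T @ np.eye(2)[ys])
y_pred = np.argmax(Hc_t @ beta_c, axis=1)

print("mean discrepancy before/after:", linear_mmd(Hs, Ht), linear_mmd(Zs, Zt))
print("target accuracy:", (y_pred == yt).mean())
```

The mean-matching term is only the simplest surrogate for the distribution-distance criterion named in the abstract; it is used here because it keeps both ELM solutions in closed form, which is what lets ELM-based methods stay efficient with few hidden nodes. A kernelized MMD or additional structure-preserving terms would be closer to the paper's full formulation.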