Dimension Reduction With Extreme Learning Machine
Saved in:
| Published in: | IEEE Transactions on Image Processing, Vol. 25, No. 8, pp. 3906-3918 |
|---|---|
| Main authors: | Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.08.2016 (The Institute of Electrical and Electronics Engineers, Inc.) |
| ISSN: | 1057-7149, 1941-0042 |
| Online access: | Full text |
| Abstract | Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and the auto-encoder (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and of the linear AE are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and the non-linear AE are hampered by slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as the extreme learning machine AE (ELM-AE) and the sparse ELM-AE (SELM-AE). In contrast to the tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, the CIFAR-10 object recognition data set, and the NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error. |
|---|---|
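The abstract's core recipe — random, untuned hidden neurons with orthogonal input weights, and output weights solved in closed form to reconstruct the input — can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function name `elm_ae`, the `tanh` activation, and the ridge constant `C` are illustrative choices.

```python
import numpy as np

def elm_ae(X, n_hidden, C=1e3, seed=0):
    """Minimal sketch of an ELM auto-encoder (ELM-AE).

    Hidden neurons are random and never tuned: input weights are
    orthogonal random columns, the bias is a normalized random vector.
    Only the output weights beta are learned, in closed form.
    Assumes n_hidden <= X.shape[1] (dimension reduction).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Orthogonal random input weights via QR decomposition
    W, _ = np.linalg.qr(rng.standard_normal((d, n_hidden)))
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)
    H = np.tanh(X @ W + b)  # random hidden-layer activations
    # Regularized least squares: beta reconstructs X from H
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return X @ beta.T  # project data onto the learned subspace

X = np.random.default_rng(42).standard_normal((100, 20))
Z = elm_ae(X, n_hidden=5)
print(Z.shape)  # (100, 5)
```

For the sparse variant (SELM-AE) the paper initializes the hidden neurons with sparse rather than orthogonal random weights; in this sketch that would amount to replacing the QR step with a sparse random matrix.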
| Author | Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou |
| Author details | 1. Liyanaarachchi Lekamalage Chamara Kasun (chamarak001@ntu.edu.sg; ORCID 0000-0002-4078-3877), School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. 2. Yan Yang (y.yang@nwpu.edu.cn), Energy Research Institute, Nanyang Technological University, Singapore. 3. Guang-Bin Huang (egbhuangg@ntu.edu.sg), School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. 4. Zhengyou Zhang (zhang@microsoft.com), Microsoft Corporation, Redmond, WA, USA. |
| PubMed | https://www.ncbi.nlm.nih.gov/pubmed/27214902 (view this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2016 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2016 |
| DOI | 10.1109/TIP.2016.2570569 |
| Discipline | Applied Sciences Engineering |
| EISSN | 1941-0042 |
| EndPage | 3918 |
| ExternalDocumentID | 4102750041 27214902 10_1109_TIP_2016_2570569 7471467 |
| Genre | orig-research Journal Article |
| ISICitedReferencesCount | 188 |
| ISSN | 1057-7149 1941-0042 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 8 |
| Keywords | Extreme learning machine (ELM); auto-encoder (AE); dimension reduction; principal component analysis (PCA); random projection (RP); non-negative matrix factorization (NMF) |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
| ORCID | 0000-0002-4078-3877 |
| PMID | 27214902 |
| PQID | 1800001656 |
| PQPubID | 85429 |
| PageCount | 13 |
| PublicationCentury | 2000 |
| PublicationDate | 2016-08-01 |
| PublicationDecade | 2010 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States – name: New York |
| PublicationTitle | IEEE transactions on image processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2016 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 3906 |
| SubjectTerms | Algorithms auto-encoder (AE) Dimension reduction Extreme Learning Machine (ELM) Learning Machine learning Machine learning algorithms Mathematical model Neural networks Noise Non-negative Matrix Factorization (NMF) Nonlinearity Object recognition Principal component analysis Principal Component Analysis (PCA) Principal components analysis random projection (RP) Reduction Regression analysis Subspaces Support vector machines |
| Title | Dimension Reduction With Extreme Learning Machine |
| URI | https://ieeexplore.ieee.org/document/7471467 https://www.ncbi.nlm.nih.gov/pubmed/27214902 https://www.proquest.com/docview/1800001656 https://www.proquest.com/docview/1800705724 https://www.proquest.com/docview/1825536469 |
| Volume | 25 |