Novel Maximum-Margin Training Algorithms for Supervised Neural Networks
Saved in:
| Published in: | IEEE Transactions on Neural Networks, Vol. 21, No. 6, pp. 972-984 |
|---|---|
| Main authors: | Ludwig, Oswaldo; Nunes, Urbano |
| Format: | Journal Article |
| Language: | English |
| Published: | New York, NY: IEEE (Institute of Electrical and Electronics Engineers), 01.06.2010 |
| Subjects: | |
| ISSN: | 1045-9227, 1941-0093 |
| Online access: | Full text |
| Abstract | This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity involved in solving the constrained optimization problem usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network is named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate. |
|---|---|
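The abstract's central mechanism is worth making concrete: a margin-based objective is backpropagated through the output and hidden layers in one process, so the hidden-layer space itself is reshaped to widen the output-hyperplane margin. The record reproduces none of the paper's equations, so the sketch below is only an illustration under assumed choices: a squared-hinge surrogate with a weight-norm penalty stands in for the MM-based objective, plain gradient descent stands in for GDX's adaptive learning rate and momentum, and the toy data and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.5, 1.0, (50, 2)), rng.normal(+1.5, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
N, n_hidden, lam, lr = len(y), 6, 1e-3, 0.05

# One-hidden-layer MLP: tanh hidden layer feeding a single linear output unit.
W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.5, n_hidden);      b2 = 0.0

for epoch in range(300):
    H = np.tanh(X @ W1 + b1)                 # hidden-layer space
    f = H @ w2 + b2                          # output score; sign = predicted class

    # Margin surrogate: squared hinge plus a weight-norm penalty, so a small
    # loss forces every pattern outside a wide functional margin.
    slack = np.maximum(0.0, 1.0 - y * f)
    loss = lam * (w2 @ w2) + np.mean(slack ** 2)
    if epoch % 100 == 0:
        print(epoch, round(float(loss), 4))

    # Backpropagate the margin objective through BOTH layers at once, so the
    # hidden space itself is reshaped to allow a wider output margin.
    g = (-2.0 / N) * y * slack               # dL/df for each pattern
    grad_w2 = H.T @ g + 2.0 * lam * w2
    grad_b2 = g.sum()
    dH = np.outer(g, w2) * (1.0 - H ** 2)    # chain rule through tanh
    grad_W1, grad_b1 = X.T @ dH, dH.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1   # plain GD stands in for GDX
    w2 -= lr * grad_w2; b2 -= lr * grad_b2
```

Each epoch touches every training pattern once and stores only the activations, so time and memory grow as O(N), consistent with the complexity claim in the abstract.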
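For MICI, the abstract states only that the objective is inspired by Fisher discriminant analysis and shapes the statistical distribution of the MLP hidden output. One minimal reading of that idea, assuming separation is scored as the classical Fisher ratio (between-class distance over within-class scatter) rather than the paper's exact functional:

```python
import numpy as np

def fisher_ratio(h, y):
    """Fisher-style separability of a 1-D projection h for labels y in {-1, +1}:
    squared distance between class means over the summed class variances.
    Maximizing this pushes the two classes apart and makes each one compact."""
    h_pos, h_neg = h[y > 0], h[y < 0]
    between = (h_pos.mean() - h_neg.mean()) ** 2
    within = h_pos.var() + h_neg.var() + 1e-12   # guard against zero scatter
    return between / within
```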
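Both training methods stop at the maximum area under the ROC curve. A common way to realize such a criterion, assumed here rather than taken from the paper, is to compute the AUC from the Mann-Whitney rank statistic after every epoch and keep the weights of the best epoch (the commented snippet reuses the variables of the MMGDX sketch above):

```python
import numpy as np

def auc(scores, y):
    """AUC via the Mann-Whitney U statistic (ties ignored for brevity)."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = int((y > 0).sum())
    n_neg = len(y) - n_pos
    return (ranks[y > 0].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# inside the training loop: snapshot the weights with the best AUC so far
# if (a := auc(f, y)) > best_auc:
#     best_auc, best_params = a, (W1.copy(), b1.copy(), w2.copy(), b2)
```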
| Author | Ludwig, Oswaldo; Nunes, Urbano |
| CODEN | ITNNEP |
| Copyright | 2015 INIST-CNRS |
| DOI | 10.1109/TNN.2010.2046423 |
| Discipline | Engineering; Anatomy & Physiology; Computer Science; Applied Sciences |
| EISSN | 1941-0093 |
| EndPage | 984 |
| Genre | orig-research; Research Support, Non-U.S. Gov't; Journal Article |
| ISICitedReferencesCount | 64 |
| ISSN | 1045-9227 1941-0093 |
| Issue | 6 |
| Keywords | Adaptive algorithm; multilayer perceptron (MLP); Gradient descent; Backpropagation algorithm; Space complexity; Vector support machine; Learning algorithm; maximal-margin (MM) principle; Mathematical programming; Backpropagation; Discriminant analysis; Statistical analysis; Minimization; Pattern recognition; Neural network; Computational complexity; Hyperplane; Fisher information; Constrained optimization; Supervised learning; Receiver operating characteristic curves; Multilayer perceptrons; Objective function; Time complexity; Artificial intelligence; Information theory |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html CC BY 4.0 |
| PMID | 20409990 |
| PageCount | 13 |
| PublicationDate | 2010-06-01 |
| PublicationPlace | New York, NY |
| PublicationTitle | IEEE transactions on neural networks |
| PublicationTitleAbbrev | TNN |
| PublicationTitleAlternate | IEEE Trans Neural Netw |
| PublicationYear | 2010 |
| Publisher | IEEE Institute of Electrical and Electronics Engineers |
| StartPage | 972 |
| SubjectTerms | Algorithms; Applied sciences; Artificial intelligence; Back propagation; Backpropagation algorithms; Complexity; Computer science, control theory, systems; Computer Simulation; Connectionism. Neural networks; Constraint optimization; Data processing. List processing. Character string processing; Exact sciences and technology; Feedback; Humans; Hyperplanes; Information Theory; Interference; Kernel; Learning - physiology; Mathematical models; maximal-margin (MM) principle; Memory organisation. Data processing; multilayer perceptron (MLP); Multilayer perceptrons; Neural networks; Neural Networks (Computer); Optimization methods; pattern recognition; Pattern Recognition, Automated - methods; ROC Curve; Software; supervised learning; Support vector machines; Testing; Training |
| Title | Novel Maximum-Margin Training Algorithms for Supervised Neural Networks |
| URI | https://ieeexplore.ieee.org/document/5451102 https://www.ncbi.nlm.nih.gov/pubmed/20409990 https://www.proquest.com/docview/734030639 https://www.proquest.com/docview/753639850 |
| Volume | 21 |