A comparative study of the scalability of a sensitivity-based learning algorithm for artificial neural networks
| Published in: | Expert systems with applications Vol. 40; no. 10; pp. 3900 - 3905 |
|---|---|
| Main Authors: | Peteiro-Barral, Diego; Guijarro-Berdiñas, Bertha; Pérez-Sánchez, Beatriz; Fontenla-Romero, Oscar |
| Format: | Journal Article |
| Language: | English |
| Published: | Amsterdam: Elsevier Ltd, 01.08.2013 |
| Subjects: | |
| ISSN: | 0957-4174, 1873-6793 |
| Abstract | ► Researchers must now study not only accuracy but also scalability. ► Researchers are investigating machine learning scalability to large scale problems. ► The scalability of popular training algorithms for ANNs is analyzed in this research. ► The training algorithm SBLLM performs better than others in terms of scalability. ► This research contributes to the standardization of scalability studies.
Until recently, the most common criterion in machine learning for evaluating the performance of algorithms was accuracy. However, the unrestrained growth of data volumes in recent years in fields such as bioinformatics, intrusion detection and engineering has raised new challenges in machine learning concerning not only accuracy but also scalability. In this research, we are concerned with the scalability of one of the best-known paradigms in machine learning, artificial neural networks (ANNs), and in particular with the training algorithm Sensitivity-Based Linear Learning Method (SBLLM). SBLLM is a learning method for two-layer feedforward ANNs, based on sensitivity analysis, which calculates the weights by solving a linear system of equations. The results show that SBLLM performs better in terms of scalability than five of the most popular and efficient training algorithms for ANNs. |
|---|---|
| Author | Peteiro-Barral, Diego; Fontenla-Romero, Oscar; Pérez-Sánchez, Beatriz; Guijarro-Berdiñas, Bertha |
| Author_xml | – sequence: 1 givenname: Diego surname: Peteiro-Barral fullname: Peteiro-Barral, Diego email: dpeteiro@udc.es – sequence: 2 givenname: Bertha surname: Guijarro-Berdiñas fullname: Guijarro-Berdiñas, Bertha – sequence: 3 givenname: Beatriz surname: Pérez-Sánchez fullname: Pérez-Sánchez, Beatriz – sequence: 4 givenname: Oscar surname: Fontenla-Romero fullname: Fontenla-Romero, Oscar |
| BackLink | http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=27179561$$DView record in Pascal Francis |
| CitedBy_id | crossref_primary_10_1016_j_eswa_2013_08_038 crossref_primary_10_1016_j_eswa_2014_07_007 |
| Cites_doi | 10.7551/mitpress/7496.003.0016 10.1016/j.patcog.2009.11.024 10.1007/978-3-642-02478-8_20 10.1016/S0893-6080(05)80056-5 10.1007/BFb0067700 10.7551/mitpress/7496.003.0006 10.1007/978-3-642-14264-2_5 |
| ContentType | Journal Article |
| Copyright | 2012 Elsevier Ltd 2014 INIST-CNRS |
| DBID | AAYXX CITATION IQODW 7SC 8FD JQ2 L7M L~C L~D |
| DOI | 10.1016/j.eswa.2012.12.076 |
| DatabaseName | CrossRef Pascal-Francis Computer and Information Systems Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional |
| DatabaseTitle | CrossRef Computer and Information Systems Abstracts Technology Research Database Computer and Information Systems Abstracts – Academic Advanced Technologies Database with Aerospace ProQuest Computer Science Collection Computer and Information Systems Abstracts Professional |
| Discipline | Computer Science Applied Sciences |
| EISSN | 1873-6793 |
| EndPage | 3905 |
| ExternalDocumentID | 27179561 10_1016_j_eswa_2012_12_076 S0957417412013176 |
| ISICitedReferencesCount | 3 |
| ISICitedReferencesURI | http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000317162900005&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| ISSN | 0957-4174 |
| IngestDate | Sun Sep 28 09:31:22 EDT 2025 Sun Nov 09 13:47:28 EST 2025 Fri Nov 25 01:07:34 EST 2022 Sat Nov 29 04:44:35 EST 2025 Tue Nov 18 19:58:27 EST 2025 Fri Feb 23 02:26:28 EST 2024 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 10 |
| Keywords | Algorithms Neural nets Classifier design and evaluation Machine learning Sensitivity analysis Scalability Intruder detector Neural network Modeling Optimization Learning (artificial intelligence) Classification Data field Feedforward Learning algorithm Bioinformatics Artificial intelligence Computer security Intrusion detection systems |
| Language | English |
| License | https://www.elsevier.com/tdm/userlicense/1.0 CC BY 4.0 |
| LinkModel | OpenURL |
| PQID | 1349455417 |
| PQPubID | 23500 |
| PageCount | 6 |
| ParticipantIDs | proquest_miscellaneous_1701069791 proquest_miscellaneous_1349455417 pascalfrancis_primary_27179561 crossref_citationtrail_10_1016_j_eswa_2012_12_076 crossref_primary_10_1016_j_eswa_2012_12_076 elsevier_sciencedirect_doi_10_1016_j_eswa_2012_12_076 |
| PublicationCentury | 2000 |
| PublicationDate | 2013-08-01 |
| PublicationDecade | 2010 |
| PublicationPlace | Amsterdam |
| PublicationTitle | Expert systems with applications |
| PublicationYear | 2013 |
| Publisher | Elsevier Ltd Elsevier |
| SourceID | proquest pascalfrancis crossref elsevier |
| SourceType | Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 3900 |
| SubjectTerms | Algorithms Applied sciences Artificial intelligence Artificial neural networks Biological and medical sciences Classifier design and evaluation Computer science; control theory; systems Computer systems and distributed systems. User interface Connectionism. Neural networks Exact sciences and technology Fundamental and applied biological sciences. Psychology General aspects Learning Linear systems Machine learning Mathematical analysis Mathematics in biology. Statistical analysis. Models. Metrology. Data processing in biology (general aspects) Memory and file management (including protection and security) Memory organisation. Data processing Neural nets Sensitivity analysis Software Training |
| Title | A comparative study of the scalability of a sensitivity-based learning algorithm for artificial neural networks |
| URI | https://dx.doi.org/10.1016/j.eswa.2012.12.076 https://www.proquest.com/docview/1349455417 https://www.proquest.com/docview/1701069791 |
| Volume | 40 |
| WOSCitedRecordID | wos000317162900005&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| journalDatabaseRights | – providerCode: PRVESC databaseName: Elsevier SD Freedom Collection Journals 2021 customDbUrl: eissn: 1873-6793 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0017007 issn: 0957-4174 databaseCode: AIEXJ dateStart: 19950101 isFulltext: true titleUrlDefault: https://www.sciencedirect.com providerName: Elsevier |
| link | http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwtV1bb9MwFLZKxwMS4o4ol8lIiJfJKInjOn7sUCdAVYegQ32L3MTpRV1SknSM_Tn-GsdxnHZsVOMBqUrTqHHafl99To7P-Q5Cb4SIuMfUhESJcIjPqCKSMUUCSbvgz064X8Uhvw34cBiMx-Jzq_XL1sKcLXmaBufnYvVfoYZjALYunf0HuJtB4QDsA-iwBdhheyPge3VeuVH0LqxotHYwCwDE6HL_NHWRhU5fN_0jiLZnse0iMT2Qy2mWz8vZqcmzzKucIh1e1wqY1VOVP15cCu1r3eSyVoe2dXNbK-SbebhU8zwjhzLPq4YDMPOqadbkAq3nC60NSQ4V0Dc1JWewX87kZoRcXZCvQNmZCYEfKt1r4GJjVHVq_lKSL9mpMqU8x_Dt8-0gh244Edggh41WcuK7pqGPnbiNzpMlqLM1DVPhOFsmHV6ya82FiVws3qnih9agcr0qNMyv0eYeHodHJ4NBOOqPR29X34luW6aX9-seLrfQnseZCNpor_exP_7ULGRxx1Ts289f122ZFMM_L_s33-juSmqCJKbVyhWvoXKFRg_QvfoeBvcM9x6ilkofofu2PwiuzcVjlPXwFhVxRUWcJRioiLeoqA9JfIWK2FIRN1TEQEW8oSI2VMSWik_QyVF_9P4DqRt8kIiKbkkSvSgbREnMqYipzwPK6QQcaMmibkwV3Ml4nqSSezE4lQ5TXaaiWLCEenHkUOXSp6idZql6hnDsqngCxgqMZuC7cSxZQCcigCFcP5Ex6yDX_rRhVKvf6yYsy9CmOS5CDUeo4QjhAXB00EFzzspov-x8N7OIhbX3arzSENi287z9S_A2l_I4GEu4vemg1xbvEOZ-vaAnU5Wti1BLi_pwP-DyHe_hOuojuHCf32CcF-jO5u_3ErXLfK1eodvRWTkv8v2a3L8Be3LkrA |
| linkProvider | Elsevier |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A+comparative+study+of+the+scalability+of+a+sensitivity-based+learning+algorithm+for+artificial+neural+networks&rft.jtitle=Expert+systems+with+applications&rft.au=Peteiro-Barral%2C+Diego&rft.au=Guijarro-Berdinas%2C+Bertha&rft.au=Perez-Sanchez%2C+Beatriz&rft.au=Fontenla-Romero%2C+Oscar&rft.date=2013-08-01&rft.issn=0957-4174&rft.volume=40&rft.issue=10&rft.spage=3900&rft.epage=3905&rft_id=info:doi/10.1016%2Fj.eswa.2012.12.076&rft.externalDBID=NO_FULL_TEXT |
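The abstract's central mechanism — computing a layer's weights in closed form by solving a linear system rather than by iterative gradient descent — can be illustrated with a minimal sketch. This is not the authors' SBLLM implementation (which, per the cited Castillo et al. 2006 reference, alternates sensitivity-based updates across two layers of a feedforward network); it only shows the closed-form least-squares idea for a single linear layer, with all data and variable names invented for illustration.

```python
import numpy as np

# Toy closed-form fit: instead of gradient descent, solve a linear
# system (least squares) for one layer's weights in a single step.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 input features
true_W = np.array([[1.0], [-2.0], [0.5]])     # hypothetical target weights
y = X @ true_W + 0.01 * rng.normal(size=(100, 1))

# Append a bias column, then solve min ||Xb W - y||^2 in closed form.
Xb = np.hstack([X, np.ones((100, 1))])
W, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ W
print(np.allclose(pred, y, atol=0.1))         # residual is only the small noise term
```

Because the fit reduces to one linear solve, its cost is dominated by a single matrix factorization rather than many epochs over the data, which is the property the paper's scalability comparison turns on.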