An ensemble of differential evolution and Adam for training feed-forward neural networks
| Published in: | Information Sciences, Vol. 608, pp. 453–471 |
|---|---|
| Main Authors: | Xue, Yu; Tong, Yiling; Neri, Ferrante |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc., 01.08.2022 |
| ISSN: | 0020-0255, 1872-6291 |
| Abstract | Adam is an adaptive gradient-descent method commonly used in back-propagation (BP) training of feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To address this problem, several metaheuristic approaches have been proposed for training FFNNs. Although these approaches have stronger global search capabilities, which let them escape local optima more readily, their convergence performance is worse than Adam's. The proposed algorithm is an ensemble of differential evolution and Adam (EDEAdam), which integrates a modern variant of the differential evolution algorithm with Adam, using the two sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, this integration gives EDEAdam the capability to handle a wide range of classification problems. Experimental results show that EDEAdam not only improves global and local search capabilities but also converges quickly. |
|---|---|
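The two-sub-population scheme described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: one sub-population is evolved by a standard DE/rand/1/bin differential evolution step (global search), the other by independent Adam updates (local search), with periodic migration of the current best solution between the pools. The toy quadratic objective, the DE parameters `F` and `CR`, the Adam hyperparameters, and the migration interval are all assumptions.

```python
import numpy as np

# Illustrative sketch of the EDEAdam idea: one sub-population evolved by
# differential evolution (global search), another by Adam (local search),
# with periodic exchange of the best individual. All hyperparameters and
# the migration scheme are assumptions, not the paper's exact algorithm.

rng = np.random.default_rng(0)

def loss(w):
    # Toy quadratic standing in for an FFNN training loss.
    return float(np.sum(w ** 2))

def grad(w):
    return 2.0 * w

def de_step(pop, F=0.5, CR=0.9):
    # DE/rand/1/bin: mutate with three distinct random vectors, apply
    # binomial crossover, keep the trial only if it improves the loss.
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        trial = np.where(rng.random(d) < CR, mutant, pop[i])
        if loss(trial) < loss(pop[i]):
            out[i] = trial
    return out

def adam_step(pop, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Independent Adam update for every individual in the sub-population.
    m, v, t = state
    t += 1
    g = grad(pop)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    step = lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return pop - step, (m, v, t)

n, d = 8, 5
de_pop = rng.normal(size=(n, d))
adam_pop = rng.normal(size=(n, d))
adam_state = (np.zeros((n, d)), np.zeros((n, d)), 0)
init_best = min(loss(w) for w in np.vstack([de_pop, adam_pop]))

for gen in range(50):
    de_pop = de_step(de_pop)
    adam_pop, adam_state = adam_step(adam_pop, adam_state)
    if gen % 10 == 9:
        # Migration: copy the overall best individual over the worst of each pool.
        pool = np.vstack([de_pop, adam_pop])
        best = pool[np.argmin([loss(w) for w in pool])].copy()
        de_pop[np.argmax([loss(w) for w in de_pop])] = best
        adam_pop[np.argmax([loss(w) for w in adam_pop])] = best

best_loss = min(loss(w) for w in np.vstack([de_pop, adam_pop]))
print(f"initial best loss {init_best:.4f} -> final best loss {best_loss:.4f}")
```

In the paper's setting, each individual would encode the full weight vector of an FFNN, and `loss`/`grad` would be the training loss and its back-propagated gradient rather than a toy quadratic.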
| Authors: | Yu Xue (School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China; xueyu@nuist.edu.cn); Yiling Tong (School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China; 20201220036@nuist.edu.cn); Ferrante Neri (NICE Research Group, Department of Computer Science, University of Surrey, Guildford, UK; f.neri@surrey.ac.uk) |
| Copyright: | 2022 |
| DOI: | 10.1016/j.ins.2022.06.036 |
| Discipline: | Engineering; Library & Information Science |
| Peer Reviewed: | Yes |
| Web of Science Citations: | 123 |
| Keywords: | Ensemble algorithms; Gradient-based algorithms; Differential algorithms; Feed-forward neural networks; Multi-populations |
doi: 10.1016/j.ins.2013.06.011 – volume: 50 start-page: 1510 issue: 5 year: 2020 ident: 10.1016/j.ins.2022.06.036_b0125 article-title: A multi-population differential evolution with best-random mutation strategy for large-scale global optimization publication-title: Appl. Intell. doi: 10.1007/s10489-019-01613-2 – ident: 10.1016/j.ins.2022.06.036_b0095 doi: 10.1109/ICNN.1995.488968 – volume: 22 start-page: 103 issue: 2 year: 2015 ident: 10.1016/j.ins.2022.06.036_b0040 article-title: Multicriteria adaptive differential evolution for global numerical optimization publication-title: Integr. Comput.-Aided Eng. doi: 10.3233/ICA-150481 – volume: 384 start-page: 34 year: 2017 ident: 10.1016/j.ins.2022.06.036_b0160 article-title: Swarm intelligence and evolutionary algorithms: performance versus speed publication-title: Inf. Sci. doi: 10.1016/j.ins.2016.12.028 – volume: 149 start-page: 29 year: 2015 ident: 10.1016/j.ins.2022.06.036_b0250 article-title: Evolved neural network ensemble by multiple heterogeneous swarm intelligence publication-title: Neurocomputing doi: 10.1016/j.neucom.2013.12.062 – volume: 185 start-page: 1026 issue: 2 year: 2007 ident: 10.1016/j.ins.2022.06.036_b0245 article-title: A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training publication-title: Appl. Math. Comput. – ident: 10.1016/j.ins.2022.06.036_b0085 doi: 10.1007/978-3-662-45523-4_50 – volume: 231 start-page: 329 year: 2014 ident: 10.1016/j.ins.2022.06.036_b0225 article-title: An ensemble algorithm with self-adaptive learning techniques for high-dimensional numerical optimization publication-title: Appl. Math. Comput. – volume: 373 start-page: 404 year: 2016 ident: 10.1016/j.ins.2022.06.036_b0005 article-title: A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights publication-title: Inf. Sci. 
doi: 10.1016/j.ins.2016.09.016 – ident: 10.1016/j.ins.2022.06.036_b0065 doi: 10.1109/CEC.2000.870802 – ident: 10.1016/j.ins.2022.06.036_b0135 doi: 10.1145/3449726.3463132 |
| StartPage | 453 |
| SubjectTerms | Differential algorithms; Ensemble algorithms; Feed-forward neural networks; Gradient-based algorithms; Multi-populations |
| Title | An ensemble of differential evolution and Adam for training feed-forward neural networks |
| URI | https://dx.doi.org/10.1016/j.ins.2022.06.036 |
| Volume | 608 |
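The abstract describes EDEAdam as running two sub-algorithms over two sub-populations in parallel: differential evolution for global search and Adam for fast local refinement. The sketch below illustrates that general idea on a toy XOR task; it is not the authors' EDEAdam. The network size, hyper-parameters, the numerical gradient (standing in for backprop), and the best-into-worst exchange step are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a small non-linearly-separable classification problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

H = 4                    # hidden units in a 2-H-1 network
DIM = 2 * H + H + H + 1  # W1 + b1 + w2 + b2, flattened into one vector

def loss(v):
    """MSE of a tanh-hidden, sigmoid-output FFNN encoded as a flat vector."""
    W1 = v[:2 * H].reshape(2, H)
    b1 = v[2 * H:3 * H]
    w2 = v[3 * H:4 * H]
    b2 = v[4 * H]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return float(np.mean((out - y) ** 2))

def num_grad(v, eps=1e-5):
    """Central-difference gradient; a stand-in for backprop to keep the sketch short."""
    g = np.empty_like(v)
    for i in range(v.size):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (loss(v + e) - loss(v - e)) / (2 * eps)
    return g

def train(gens=150, n_de=10, n_adam=5, F=0.5, CR=0.9, lr=0.1):
    de_pop = rng.normal(0.0, 1.0, (n_de, DIM))    # explored by DE
    ad_pop = rng.normal(0.0, 1.0, (n_adam, DIM))  # refined by Adam
    m = np.zeros_like(ad_pop)
    s = np.zeros_like(ad_pop)
    beta1, beta2 = 0.9, 0.999
    init_best = min(loss(v) for v in np.vstack([de_pop, ad_pop]))
    best_loss = init_best
    for t in range(1, gens + 1):
        # DE/rand/1/bin with greedy selection on the exploratory sub-population.
        for i in range(n_de):
            others = [j for j in range(n_de) if j != i]
            a, b, c = de_pop[rng.choice(others, 3, replace=False)]
            cross = rng.random(DIM) < CR
            cross[rng.integers(DIM)] = True  # force at least one mutant gene
            trial = np.where(cross, a + F * (b - c), de_pop[i])
            if loss(trial) <= loss(de_pop[i]):
                de_pop[i] = trial
        # Adam updates on the exploitative sub-population.
        for i in range(n_adam):
            g = num_grad(ad_pop[i])
            m[i] = beta1 * m[i] + (1 - beta1) * g
            s[i] = beta2 * s[i] + (1 - beta2) * g * g
            m_hat = m[i] / (1 - beta1 ** t)
            s_hat = s[i] / (1 - beta2 ** t)
            ad_pop[i] -= lr * m_hat / (np.sqrt(s_hat) + 1e-8)
        # Information exchange (an assumption in this sketch): copy the overall
        # best individual over the worst member of each sub-population.
        pool = np.vstack([de_pop, ad_pop])
        best = pool[int(np.argmin([loss(v) for v in pool]))]
        de_pop[int(np.argmax([loss(v) for v in de_pop]))] = best
        j = int(np.argmax([loss(v) for v in ad_pop]))
        ad_pop[j] = best
        m[j] = 0.0  # reset stale Adam moments for the replaced individual
        s[j] = 0.0
        best_loss = min(best_loss, min(loss(v) for v in pool))
    return init_best, best_loss
```

Calling `train()` returns the best loss before and after training; on this toy problem the combined search drives the error well below the random-initialization baseline. The exchange step is one plausible coupling mechanism; the published algorithm's actual migration scheme is not specified in this record.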