Fractional-order differential evolution for training dendritic neuron model
Saved in:
| Published in: | The Journal of Supercomputing, Volume 81; Issue 16; p. 1543 |
|---|---|
| Main authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer Nature B.V., 08.11.2025 |
| ISSN: | 0920-8542, 1573-0484 |
| Summary: | The dendritic neuron model (DNM) has attracted widespread attention for emulating the complex information processing of biological neurons, but its performance is severely limited by error backpropagation (BP), which suffers from local minima, saddle points, and sensitivity to initial parameters. To address this, this paper proposes combining the fractional derivative with the differential evolution algorithm to train the DNM, yielding the fractional-order differential evolution (FODE) algorithm. Based on the power-law memory of fractional calculus, the mutation step of differential evolution is improved to incorporate adaptiveness and fractional individuals, leading to a more focused and effective exploration of the search space. Furthermore, a robust mutation strategy guided by elite individuals and an external archive enhances population diversity, while the dynamic crossover strategy based on the Beta distribution is retained, breaking the limitation of simple linear combinations. Another improvement in FODE is the dynamic treatment of the mutation parameter F and the crossover probability CR, which enhances search efficiency and global optimization performance. Given the computational intensity of fractional-order operations and population-based evolutionary optimization, this work relies on high-performance computing (HPC) resources for parallelization and accelerated experimentation. Comparisons with eleven other algorithms on twelve datasets demonstrate that FODE is a highly advantageous training method for the DNM, which opens new possibilities for applying fractional-order theory to intelligent algorithms and neural networks and underscores the necessity of supercomputing in complex algorithm design. |
| DOI: | 10.1007/s11227-025-08004-0 |
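
The abstract describes three algorithmic ingredients: a fractional-order (power-law memory) mutation, elite- and archive-guided donor selection, and a Beta-distribution-based crossover with dynamically adapted F and CR. The paper's exact formulas are not reproduced in this record, so the Python sketch below is only an illustrative reconstruction under stated assumptions: Grünwald-Letnikov coefficients stand in for the fractional term, the normalization of the memory weights and the Beta crossover parametrization (`a`, `b`, per-dimension draws) are guesses, and all function names are hypothetical.

```python
import numpy as np

def gl_weights(alpha: float, K: int) -> np.ndarray:
    """Grunwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k).

    Their magnitudes decay like a power law ~ k^-(alpha+1), which is
    the fading long-term memory that the abstract attributes to
    fractional calculus.
    """
    c = np.empty(K)
    c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fode_style_mutant(pop, fitness, i, F, alpha, history, archive,
                      p=0.1, rng=None):
    """One mutant vector in the spirit of the abstract's description.

    Combines elite ('pbest') guidance with an external archive, as the
    abstract states, plus a power-law-weighted memory of individual i's
    past positions standing in for the fractional-order term. This is
    an illustrative reconstruction, not the published update rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)

    # Elite guidance: pbest drawn from the top p-fraction by fitness
    # (minimization assumed).
    top = np.argsort(fitness)[:max(1, int(np.ceil(p * n)))]
    pbest = pop[rng.choice(top)]

    # r1 from the population; r2 from population union external archive.
    r1 = pop[rng.integers(n)]
    union = np.vstack([pop, np.asarray(archive)]) if len(archive) else pop
    r2 = union[rng.integers(len(union))]

    # Fractional memory: power-law-weighted blend of past positions,
    # newest first. Normalizing |c_k| keeps the blend a valid position
    # (a sketch choice, not taken from the paper).
    past = np.asarray(history[i][::-1])           # newest ... oldest
    w = np.abs(gl_weights(alpha, len(past)))
    base = (w / w.sum()) @ past

    return base + F * (pbest - pop[i]) + F * (r1 - r2)

def beta_crossover(x, v, a=2.0, b=2.0, rng=None):
    """Per-dimension Beta-distributed mixing of parent and mutant.

    The abstract only states that crossover is dynamic, based on the
    Beta distribution, and goes beyond simple linear combinations; the
    parametrization here is assumed.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(a, b, size=x.shape)
    return lam * v + (1.0 - lam) * x
```

In a complete differential evolution loop, `fode_style_mutant` and `beta_crossover` would take the place of the classic rand/1 mutation and binomial crossover, with `F`, `CR`, and the fractional order `alpha` adapted each generation in line with the abstract's dynamic-parameter scheme.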