Distributed Bayesian optimisation framework for deep neuroevolution


Bibliographic Details
Published in: Neurocomputing (Amsterdam), Volume 470, pp. 51-65
Main Authors: Chandra, Rohitash; Tiwari, Animesh
Format: Journal Article
Language: English
Published: Elsevier B.V., 22.01.2022
ISSN: 0925-2312, 1872-8286
Description
Summary: Neuroevolution is a machine learning method for evolving neural network parameters and topology, with a degree of flexibility that makes it applicable to a wide range of architectures. Neuroevolution has been popular in reinforcement learning and has also been shown to be promising for deep learning. A major limitation of neuroevolution, however, is the high computational time required for convergence, since learning (evolution) typically does not use gradient information. Bayesian optimisation, also known as surrogate-assisted optimisation, has been popular for expensive engineering optimisation problems and for hyper-parameter tuning in machine learning; its major feature is that it reduces computational load by replacing evaluations of the actual model with a computationally cheaper surrogate model, guided by an acquisition function. It therefore has potential for training deep learning models via neuroevolution, given large datasets and complex models. Recent advances in parallel and distributed computing have enabled efficient implementation of neuroevolution for complex and computationally expensive neural models. In this paper, we present a Bayesian optimisation framework for deep neuroevolution that uses a distributed architecture to provide computational efficiency in training. Our results are promising for models ranging from simple neural networks to deep models such as convolutional neural networks, which motivates further applications.
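The abstract's description of Bayesian optimisation as surrogate-assisted optimisation can be illustrated with a minimal, generic sketch (not the authors' distributed framework): a Gaussian-process surrogate is fitted to a handful of expensive evaluations, and an expected-improvement acquisition function selects the next point to evaluate. The toy objective, bounds, and evaluation budget below are illustrative assumptions; the sketch uses scikit-learn and SciPy.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expensive_objective(x):
    """Stand-in for an expensive fitness evaluation (e.g. training a
    network with a candidate parameter vector); hypothetical 1-D toy."""
    return np.sin(3.0 * x) + 0.5 * x


rng = np.random.default_rng(0)
low, high = -2.0, 2.0

# Initial design: a few evaluations of the true (expensive) objective.
X = rng.uniform(low, high, size=(5, 1))
y = np.array([expensive_objective(float(v[0])) for v in X])

# Gaussian-process surrogate of the objective.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)


def expected_improvement(candidates, model, y_best, xi=0.01):
    """Expected-improvement acquisition for a maximisation problem."""
    mu, sigma = model.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)


for _ in range(20):
    gp.fit(X, y)
    # The candidate search queries only the cheap surrogate;
    # the expensive objective is evaluated once per iteration.
    grid = np.linspace(low, high, 500).reshape(-1, 1)
    x_next = grid[np.argmax(expected_improvement(grid, gp, y.max()))]
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, expensive_objective(float(x_next[0])))

print("Best value found:", y.max(), "at x =", float(X[np.argmax(y)][0]))
```

In the setting the paper describes, the expensive objective would correspond to evaluating a candidate network and the work would be spread over a distributed architecture; those details are specific to the paper and not reproduced in this sketch.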
DOI: 10.1016/j.neucom.2021.10.045