Biasing the transition of Bayesian optimization algorithm between Markov chain states in dynamic environments

Bibliographic Details
Published in: Information Sciences, Vol. 334-335, pp. 44-64
Main Authors: Kaedi, Marjan, Ghasem-Aghaee, Nasser, Ahn, Chang Wook
Format: Journal Article
Language: English
Published: Elsevier Inc., 20.03.2016
ISSN: 0020-0255, 1872-6291
Description
Summary: When memory-based evolutionary algorithms are applied in dynamic environments, the certain use of uncertain prior knowledge about future environments may mislead the evolutionary algorithms. To address this problem, this paper presents a new memory-based evolutionary approach for applying the Bayesian optimization algorithm (BOA) in dynamic environments. Unlike existing memory-based methods, the proposed method uses the knowledge of former environments probabilistically in future environments. For this purpose, the run of BOA is modeled as movements in a Markov chain whose states are the Bayesian networks learned in every generation. When the environment changes, a stationary distribution of the Markov chain is defined on the basis of the retrieved prior knowledge. The transition probabilities of BOA in the Markov chain are then modified (biased) to comply with this stationary distribution. To this end, the Metropolis algorithm is employed and the K2 algorithm for learning the Bayesian network in BOA is modified to reflect the obtained transition probabilities. Experimental results show that the proposed method achieves improved performance compared with conventional methods, especially in random environments.
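The record does not include the authors' implementation, but the core mechanism named in the summary, accepting or rejecting a proposed move so that the chain settles into a prescribed stationary distribution, is the standard Metropolis rule. The Python sketch below illustrates that rule on a toy discrete state space; the `propose` and `target_prob` functions, the prior-based weights, and the four-state example are illustrative assumptions, not the paper's actual Bayesian-network states or its modified K2 algorithm.

```python
import random

def metropolis_step(current, propose, target_prob):
    """One Metropolis step: propose a candidate state and accept it with
    probability min(1, pi(candidate) / pi(current)), so the chain's
    stationary distribution matches the target pi.

    current     -- the current state (here, just an index of a candidate model)
    propose     -- function returning a candidate state (symmetric proposal assumed)
    target_prob -- function giving the (unnormalized) target probability pi(state)
    """
    candidate = propose(current)
    ratio = target_prob(candidate) / max(target_prob(current), 1e-12)
    if random.random() < min(1.0, ratio):
        return candidate   # accept: move to the proposed state
    return current         # reject: stay in the current state


# Toy usage: bias a random walk over four abstract "model" states toward a
# target distribution built from hypothetical prior knowledge.
if __name__ == "__main__":
    prior_scores = [0.1, 0.2, 0.3, 0.4]            # assumed prior-based weights
    target = lambda s: prior_scores[s]
    propose = lambda s: random.randrange(4)        # symmetric uniform proposal

    state, counts = 0, [0, 0, 0, 0]
    for _ in range(20000):
        state = metropolis_step(state, propose, target)
        counts[state] += 1
    print([c / sum(counts) for c in counts])       # approaches [0.1, 0.2, 0.3, 0.4]
```

In the paper's setting the states are Bayesian networks rather than integers, so the proposal comes from the (modified) model-building step of BOA and the target distribution encodes the retrieved prior knowledge; the acceptance logic, however, follows the same pattern as above.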
DOI: 10.1016/j.ins.2015.11.030