A Deep Learning-Driven Black-Box Benchmark Generation Method via Exploratory Landscape Analysis

Bibliographic Details
Published in: Applied Sciences, Vol. 15, No. 15, p. 8454
Main Authors: Liang, Haoming; Zhao, Fuqing; Xu, Tianpeng; Zhang, Jianlin
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2025
ISSN: 2076-3417
Summary: In the context of algorithm selection, the careful design of benchmark functions and problem instances plays a pivotal role in evaluating the performance of optimization methods. Traditional benchmark functions have been criticized for their limited resemblance to real-world problems and insufficient coverage of the problem space. Exploratory landscape analysis (ELA) offers a systematic framework for characterizing objective functions based on quantitative landscape features. This study proposes a method for generating benchmark functions tailored to single-objective continuous optimization problems with boundary constraints, using predefined ELA feature vectors to guide their construction. The process begins with the creation of random decision variables and corresponding objective values, which are iteratively adjusted using the covariance matrix adaptation evolution strategy (CMA-ES) until they align with a target ELA feature vector within a specified tolerance. Once the feature criteria are met, the resulting sample points are used to train a neural network that produces a surrogate function retaining the desired landscape characteristics. To validate the proposed approach, functions from the well-known Black Box Optimization Benchmark (BBOB) suite are replicated, and novel functions are generated with unique ELA feature combinations not found in the original suite. The experimental results demonstrate that the synthesized landscapes closely resemble their BBOB counterparts and preserve the consistency of the algorithm rankings, thereby supporting the effectiveness of the proposed approach.
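The pipeline summarized above (sample a landscape, adjust it with CMA-ES until its ELA features match a target, then fit a neural-network surrogate) can be illustrated with a minimal sketch. Everything in the sketch is an assumption made for illustration, not the authors' implementation: ela_features is a tiny stand-in (skewness, kurtosis, fitness-distance correlation) for the full ELA feature sets a library such as pflacco would compute, TARGET is an arbitrary feature vector rather than one taken from the paper, and CMA-ES here adjusts only the vector of objective values at a fixed sample of decision variables.

```python
import numpy as np
import cma                                   # pycma: ask/tell CMA-ES
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
DIM, N_SAMPLES = 2, 100

# Fixed sample of decision variables in a BBOB-style box [-5, 5]^DIM.
X = rng.uniform(-5.0, 5.0, size=(N_SAMPLES, DIM))

def ela_features(y):
    """Tiny stand-in for an ELA feature vector computed from the sample (X, y)."""
    d = np.linalg.norm(X - X[np.argmin(y)], axis=1)   # distance to best sample point
    fdc = np.corrcoef(d, y)[0, 1]                     # fitness-distance correlation
    return np.array([skew(y), kurtosis(y), fdc])

# Hypothetical target feature vector; in the paper this would come from a BBOB
# function or from a point in ELA feature space not covered by the suite.
TARGET = np.array([0.5, -0.3, 0.7])

def mismatch(y):
    """CMA-ES objective: distance between current and target ELA features."""
    return float(np.linalg.norm(ela_features(np.asarray(y)) - TARGET))

# CMA-ES iteratively adjusts the vector of objective values at the fixed sample
# until its ELA features approach the target (or the iteration budget is spent).
es = cma.CMAEvolutionStrategy(rng.standard_normal(N_SAMPLES), 0.5,
                              {'maxiter': 300, 'verbose': -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [mismatch(c) for c in candidates])
y_best = np.asarray(es.result.xbest)
print("feature mismatch:", mismatch(y_best))

# Train a neural-network surrogate on the adjusted (X, y) pairs so the generated
# landscape can be evaluated at arbitrary points in the box.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(X, y_best)
print("surrogate at the origin:", surrogate.predict(np.zeros((1, DIM)))[0])
```

The surrogate network is what turns the optimized point set into a usable benchmark: once fitted, it can be queried at any point in the box, which is the property the abstract relies on when comparing the synthesized landscapes against their BBOB counterparts.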
DOI: 10.3390/app15158454