An adaptive grey wolf optimization with differential evolution operator for solving the discount {0–1} knapsack problem


Bibliographic Details
Published in: Neural Computing & Applications, Vol. 37, No. 27, pp. 22369-22385
Main Authors: Wang, Zijian, Fang, Xi, Gao, Fei, Xie, Liang, Meng, Xianchen
Format: Journal Article
Language:English
Published: London: Springer London (Springer Nature B.V.), 01.09.2025
ISSN:0941-0643, 1433-3058
Description
Summary:The discount {0–1} knapsack problem (D {0–1} KP) is a recent variant of the knapsack problem; it is NP-hard and a binary optimization problem. As a recent swarm intelligence algorithm that imitates the leadership hierarchy of wolf packs, the grey wolf optimizer (GWO) can solve NP-hard problems more effectively than exact algorithms. At the same time, the GWO has fewer parameters, computes faster, and is easier to implement than other intelligent algorithms. This paper introduces a method for adaptively updating the prey position of the wolves, together with a differential evolution operator whose scaling factor adapts to the iteration count; the value of a search-agent parameter decides which of the two operators each agent applies in an iteration. Finally, these are combined with a greedy repair operator improved for D {0–1} KP to form the adaptive grey wolf optimization with differential evolution operator (de-AGWO). Experimental results on standard test functions show that the proposed algorithm significantly improves function optimization performance. Experimental results on D {0–1} KP show that the proposed algorithm yields superior solutions, except on uncorrelated datasets, and has significant advantages on strongly correlated datasets. Finally, it is verified that more than 80% of the iterations use the grey wolf evolution operator, confirming that the core of the algorithm remains the GWO.
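The per-agent operator selection described in the abstract can be sketched roughly as follows. This is an illustrative, generic sketch over a continuous search space, combining a standard three-leader GWO position update with a DE/rand/1 mutation; the function name de_agwo_step, the 0.8 selection probability, and the linear decay schedules for the GWO control parameter a and the DE scaling factor F are placeholders, not the paper's exact formulation (which additionally involves a binary encoding and the greedy repair operator for D {0–1} KP).

```python
import numpy as np

def de_agwo_step(wolves, fitness, t, t_max, rng, p_gwo=0.8):
    """One illustrative iteration mixing a GWO update with a DE/rand/1
    mutation, choosing the operator independently for each search agent.
    wolves: (n, dim) positions; fitness: (n,) values to minimize."""
    n, dim = wolves.shape
    order = np.argsort(fitness)                 # best three wolves lead
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
    a = 2.0 * (1.0 - t / t_max)                 # GWO control parameter, 2 -> 0
    F = 0.9 - 0.5 * (t / t_max)                 # DE scaling factor decays with iterations
    new = np.empty_like(wolves)
    for i in range(n):
        if rng.random() < p_gwo:
            # GWO operator: move toward the average of the three leaders
            x = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                x += leader - A * np.abs(C * leader - wolves[i])
            new[i] = x / 3.0
        else:
            # DE/rand/1 mutation on three distinct random agents
            r1, r2, r3 = rng.choice(n, size=3, replace=False)
            new[i] = wolves[r1] + F * (wolves[r2] - wolves[r3])
    return new
```

For the knapsack problem itself, each continuous position would then be mapped to a 0/1 string and made feasible by the repair operator before fitness evaluation; the sketch above covers only the search-space update.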
DOI:10.1007/s00521-023-09075-x