An adaptive grey wolf optimization with differential evolution operator for solving the discount {0–1} knapsack problem

Detailed bibliography
Published in: Neural Computing & Applications, Volume 37, Issue 27, pp. 22369–22385
Main authors: Wang, Zijian; Fang, Xi; Gao, Fei; Xie, Liang; Meng, Xianchen
Format: Journal Article
Language: English
Published: London: Springer London, 01.09.2025
Springer Nature B.V.
ISSN: 0941-0643, 1433-3058
Description
Summary: The discount {0–1} knapsack problem (D {0–1} KP) is a new variant of the knapsack problem; it is NP-hard and a binary optimization problem. As a recent intelligent algorithm that imitates the leadership hierarchy of wolf packs, the grey wolf optimizer (GWO) can solve NP-hard problems more effectively than exact algorithms, while requiring fewer parameters, computing faster, and being easier to implement than other intelligent algorithms. This paper introduces a method for adaptively updating the wolves' prey position together with a differential evolution operator whose scaling factor adapts to the iteration count; the value of a search-agent parameter determines which operator is applied in each iteration. These components are combined with an improved greedy repair operator tailored to the D {0–1} KP to form the adaptive grey wolf optimization with differential evolution operator (de-AGWO). Experimental results on standard test functions show that the proposed algorithm significantly improves function optimization performance, and results on the D {0–1} KP show that it yields superior solutions, except on uncorrelated datasets, with significant advantages on strongly correlated datasets. Finally, it is verified that more than 80% of iterations use the grey wolf evolution operator, confirming that the GWO remains the core of the algorithm.
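
The abstract describes the algorithm's components only at a high level. Below is a minimal, illustrative sketch of a GWO/DE hybrid loop of the kind described, run on a continuous benchmark; it is not the authors' implementation. The sphere objective, the linear schedules for the GWO coefficient a and the DE scaling factor F, the crossover rate CR, and the random 0.8 operator-selection threshold (standing in for the paper's search-agent-parameter criterion) are all assumptions made for illustration; the D {0–1} KP binary encoding and greedy repair step are omitted.

# Minimal sketch of a GWO/DE hybrid iteration (illustrative assumptions, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Standard continuous benchmark objective (assumed here for illustration).
    return float(np.sum(x * x))

DIM, POP, ITERS = 10, 30, 200
LB, UB = -5.0, 5.0
F_MIN, F_MAX, CR = 0.2, 0.8, 0.9         # assumed DE scaling-factor range and crossover rate

wolves = rng.uniform(LB, UB, (POP, DIM))
fitness = np.array([sphere(w) for w in wolves])

for t in range(ITERS):
    a = 2.0 * (1.0 - t / ITERS)              # GWO coefficient decreases linearly over iterations
    F = F_MAX - (F_MAX - F_MIN) * t / ITERS  # DE scaling factor adapts with the iteration count
    order = np.argsort(fitness)
    alpha, beta, delta = wolves[order[:3]]   # the three leading wolves
    for i in range(POP):
        # A simple random threshold stands in for the paper's search-agent-parameter switch.
        if rng.random() < 0.8:
            # Grey wolf position update: move toward the average of the three leaders.
            new = np.zeros(DIM)
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(DIM) - 1.0)
                C = 2.0 * rng.random(DIM)
                D = np.abs(C * leader - wolves[i])
                new += (leader - A * D) / 3.0
        else:
            # DE/rand/1 mutation with binomial crossover.
            idx = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
            r1, r2, r3 = wolves[idx]
            mutant = r1 + F * (r2 - r3)
            mask = rng.random(DIM) < CR
            mask[rng.integers(DIM)] = True   # ensure at least one mutant component survives
            new = np.where(mask, mutant, wolves[i])
        new = np.clip(new, LB, UB)
        f_new = sphere(new)
        if f_new < fitness[i]:               # greedy replacement
            wolves[i], fitness[i] = new, f_new

print("best value found:", fitness.min())

For the D {0–1} KP itself, the continuous positions would additionally be mapped to a binary selection (for example via a transfer function) and infeasible selections would be repaired by the greedy repair operator mentioned in the abstract.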
DOI: 10.1007/s00521-023-09075-x