MetaPerceptron: A standardized framework for metaheuristic-driven multi-layer perceptron optimization

Bibliographic Details
Published in: Computer Standards and Interfaces, Volume 93, Article 103977
Main Authors: Thieu, Nguyen Van; Mirjalili, Seyedali; Garg, Harish; Hoang, Nguyen Thanh
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2025
ISSN: 0920-5489
Description
Summary:
• MetaPerceptron: a user-friendly and comprehensive metaheuristic-based MLP framework.
• Supports regression and classification tasks with more than 200 metaheuristic algorithms.
• Comprehensive resources: examples, documentation, and test cases for users.
• Its user-friendly interface allows non-coders to solve problems with minimal code.
• Flexible framework: users can easily modify algorithms and code, and replace datasets.

The multi-layer perceptron (MLP) remains a foundational architecture within neural networks, widely recognized for its ability to model complex, non-linear relationships between inputs and outputs. Despite this success, MLP training often faces challenges such as susceptibility to local optima and overfitting when it relies on traditional gradient-descent optimization. Metaheuristic algorithms (MHAs) have recently emerged as robust alternatives for optimizing MLP training, yet no current package offers a comprehensive, standardized framework for MHA-MLP hybrid models. This paper introduces MetaPerceptron, a standardized open-source Python framework designed to integrate MHAs with MLPs seamlessly, supporting both regression and classification tasks. MetaPerceptron is built on top of PyTorch, Scikit-Learn, and Mealpy. Through this design, MetaPerceptron promotes standardization in MLP optimization and incorporates essential machine learning utilities such as model forecasting, feature selection, hyperparameter tuning, and pipeline creation. By offering over 200 MHAs, MetaPerceptron empowers users to experiment across a broad array of metaheuristic optimization techniques without reimplementation. This framework significantly enhances accessibility, adaptability, and consistency in metaheuristic-trained neural network research and applications, positioning it as a valuable resource for machine learning, data science, and computational optimization. The entire source code is freely available on GitHub: https://github.com/thieu1995/MetaPerceptron
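Since the abstract describes a Scikit-Learn-compatible estimator interface, a brief usage sketch may help illustrate the intended workflow. The sketch below is hypothetical: the class name MhaMlpClassifier, its keyword arguments (hidden_layers, act_names, optim, optim_params), and the optimizer identifier "BaseWOA" are assumptions inferred from the abstract and from Mealpy's naming conventions, not a verified API; consult the GitHub repository for the actual interface.

# Hypothetical usage sketch for an MHA-trained MLP classifier.
# NOTE: the metaperceptron class and parameter names below are
# assumptions inferred from the abstract, not a confirmed API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from metaperceptron import MhaMlpClassifier  # assumed class name

# Load a small binary-classification dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Configure an MLP whose weights are tuned by a metaheuristic
# (here Whale Optimization, "BaseWOA" in Mealpy's naming) instead
# of gradient descent. All keyword names are assumptions.
model = MhaMlpClassifier(
    hidden_layers=(30,),          # one hidden layer of 30 units
    act_names="ReLU",             # hidden-layer activation
    optim="BaseWOA",              # Mealpy optimizer identifier
    optim_params={"epoch": 100, "pop_size": 30},
)

# Scikit-Learn-style fit/predict workflow, as the abstract suggests.
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

Because the estimator follows the Scikit-Learn interface, it should also compose with standard utilities such as pipelines and cross-validation, which is consistent with the pipeline-creation support the abstract mentions.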
DOI: 10.1016/j.csi.2025.103977