AI-Based Optimization of a Neural Discrete-Time Sliding Mode Controller via Bayesian, Particle Swarm, and Genetic Algorithms

Bibliographic Details
Title: AI-Based Optimization of a Neural Discrete-Time Sliding Mode Controller via Bayesian, Particle Swarm, and Genetic Algorithms
Authors: Carlos E. Castañeda
Source: Robotics; Volume 14; Issue 9; Pages: 128
Publisher Information: Multidisciplinary Digital Publishing Institute
Publication Year: 2025
Collection: MDPI Open Access Publishing
Subject Terms: AI-based controller, neural sliding mode control, robotic manipulator, gain optimization, Bayesian optimization, particle swarm optimization, genetic algorithm optimization
Description: This work introduces a unified Artificial Intelligence-based framework for the optimal tuning of gains in a neural discrete-time sliding mode controller (SMC) applied to a two-degree-of-freedom robotic manipulator. The novelty lies in combining surrogate-assisted optimization with normalized search spaces to enable a fair comparative analysis of three metaheuristic strategies: Bayesian Optimization (BO), Particle Swarm Optimization (PSO), and Genetic Algorithms (GAs). The manipulator dynamics are identified via a discrete-time recurrent high-order neural network (NN) trained online using an Extended Kalman Filter with adaptive noise covariance updates, allowing the model to accurately capture unmodeled dynamics, nonlinearities, parametric variations, and process/measurement noise. This neural representation serves as the predictive plant for the discrete-time SMC, enabling precise control of joint angular positions under phase-shifted sinusoidal references. To construct the optimization dataset, MATLAB® simulations sweep the controller gains (k₀*, k₁*) over a bounded physical domain, logging steady-state tracking errors. These errors are normalized to mitigate scaling effects and improve convergence stability. Optimization is executed in Python® using integrated scikit-learn, DEAP, and scikit-optimize routines. Simulation results reveal that all three algorithms reach high-performance gain configurations. Here, the combined cost is the normalized aggregate objective J̃ constructed from the steady-state tracking errors of both joints. Under identical experimental conditions (shared data loading/normalization and a single Python pipeline), PSO attains the lowest error in Joint 1 (7.36×10⁻⁵ rad) with the shortest runtime (23.44 s); GA yields the lowest error in Joint 2 (8.18×10⁻³ rad) at higher computational expense (≈69.7 s including refinement); and BO is competitive in both joints (7.81×10⁻⁵ rad, 8.39×10⁻³ rad) with a runtime comparable to PSO (23.65 s) while using only 50 evaluations.
Document Type: text
File Description: application/pdf
Language: English
Relation: AI in Robotics; https://dx.doi.org/10.3390/robotics14090128
DOI: 10.3390/robotics14090128
Availability: https://doi.org/10.3390/robotics14090128
Rights: https://creativecommons.org/licenses/by/4.0/
Accession Number: edsbas.F78B7662
Database: BASE
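
The abstract above describes normalizing the steady-state tracking errors logged by the MATLAB® gain sweep before optimization, to mitigate scaling effects between the two joints. A minimal sketch of that step, assuming scikit-learn's MinMaxScaler and a small hypothetical error table (the numbers below are placeholders, not values from the article):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical logged data: one row per sampled gain pair (k0*, k1*),
# columns = steady-state tracking errors of Joint 1 and Joint 2 (rad).
errors = np.array([
    [7.4e-5, 8.2e-3],
    [1.2e-4, 9.0e-3],
    [9.8e-5, 8.5e-3],
])

# Scale each joint's error column to [0, 1] so neither joint dominates
# the combined cost purely because of its magnitude.
errors_norm = MinMaxScaler().fit_transform(errors)

# Combined normalized cost per gain pair (a simple aggregate of both joints).
j_tilde_values = errors_norm.sum(axis=1)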
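For the optimization stage, the abstract reports Bayesian Optimization run in Python® with scikit-optimize under a budget of 50 evaluations over a normalized search space. A minimal sketch under those assumptions, where j_tilde() is a hypothetical smooth placeholder standing in for the real normalized aggregate objective J̃ derived from the simulation sweep:

from skopt import gp_minimize
from skopt.space import Real

def j_tilde(gains):
    # Placeholder cost surface for illustration only; in the described
    # pipeline this value would come from the normalized steady-state
    # tracking errors of both joints at the gains (k0*, k1*).
    k0, k1 = gains
    return (k0 - 0.6) ** 2 + 0.5 * (k1 - 0.3) ** 2

# Normalized search space in [0, 1]^2, mirroring the bounded physical
# gain domain swept in MATLAB.
space = [Real(0.0, 1.0, name="k0"), Real(0.0, 1.0, name="k1")]

# Gaussian-process surrogate with 50 objective evaluations, matching the
# BO budget reported in the abstract.
result = gp_minimize(j_tilde, space, n_calls=50, random_state=0)
print("best normalized gains:", result.x, "best cost:", result.fun)

The PSO and GA runs compared in the article would reuse the same normalized space and objective, with the search loop implemented, for example, through DEAP's evolutionary toolbox rather than gp_minimize.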