Online Deterministic Annealing for Classification and Clustering

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 34, no. 10, pp. 7125-7134
Main Authors: Mavridis, Christos N., Baras, John S.
Format: Journal Article
Language:English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2023
ISSN: 2162-237X, 2162-2388
Description
Summary: Inherent in virtually every iterative machine learning algorithm is the problem of hyperparameter tuning, which includes three major design parameters: 1) the complexity of the model, e.g., the number of neurons in a neural network; 2) the initial conditions, which heavily affect the behavior of the algorithm; and 3) the dissimilarity measure used to quantify its performance. We introduce an online prototype-based learning algorithm that can be viewed as a progressively growing competitive-learning neural network architecture for classification and clustering. The learning rule of the proposed approach is formulated as an online gradient-free stochastic approximation algorithm that solves a sequence of appropriately defined optimization problems, simulating an annealing process. The annealing nature of the algorithm contributes to avoiding poor local minima, offers robustness with respect to the initial conditions, and provides a means to progressively increase the complexity of the learning model, through an intuitive bifurcation phenomenon. The proposed approach is interpretable, requires minimal hyperparameter tuning, and allows online control over the performance-complexity tradeoff. Finally, we show that Bregman divergences appear naturally as a family of dissimilarity measures that play a central role in both the performance and the computational complexity of the learning algorithm.
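
To make the annealing recursion described in the summary concrete, the following is a minimal sketch, not the authors' reference implementation, of an online deterministic-annealing clustering loop. It assumes the squared Euclidean distance as the Bregman divergence, a geometric cooling schedule, and a simple duplicate-perturb-merge rule to expose prototype bifurcations; all names and parameters (oda_clustering, soft_assignments, gamma, split_eps) are illustrative.

    # Minimal sketch (illustrative only) of online deterministic annealing for clustering,
    # assuming a squared Euclidean Bregman divergence and a geometric cooling schedule.
    import numpy as np

    def soft_assignments(x, prototypes, T):
        """Gibbs association probabilities of sample x to each prototype at temperature T."""
        d = np.sum((prototypes - x) ** 2, axis=1)      # dissimilarity d(x, mu_i); here squared Euclidean
        w = np.exp(-(d - d.min()) / T)                 # subtract the minimum for numerical stability
        return w / w.sum()

    def merge_coincident(prototypes, tol=1e-3):
        """Collapse prototype copies that have not yet bifurcated (still effectively identical)."""
        kept = []
        for mu in prototypes:
            if all(np.sum((mu - k) ** 2) > tol for k in kept):
                kept.append(mu)
        return np.array(kept)

    def oda_clustering(stream, T0=1.0, T_min=1e-2, gamma=0.8,
                       steps_per_T=400, split_eps=1e-2, seed=0):
        """Prototypes are updated by a stochastic-approximation rule at each temperature,
        then duplicated and perturbed before cooling so that bifurcations can grow the model."""
        rng = np.random.default_rng(seed)
        x0 = next(stream)
        prototypes = np.array([x0], dtype=float)
        T = T0
        while T > T_min:
            # duplicate and slightly perturb every prototype; copies separate only below a critical temperature
            prototypes = np.vstack([prototypes,
                                    prototypes + split_eps * rng.standard_normal(prototypes.shape)])
            for n in range(1, steps_per_T + 1):
                x = next(stream)
                p = soft_assignments(x, prototypes, T)
                lr = 1.0 / n                           # diminishing stochastic-approximation step size
                prototypes = prototypes + lr * p[:, None] * (x - prototypes)
            prototypes = merge_coincident(prototypes)  # drop copies that did not bifurcate
            T *= gamma                                 # anneal the temperature
        return prototypes

    # Example usage with a synthetic two-cluster stream:
    # rng = np.random.default_rng(1)
    # data = np.vstack([rng.normal(-2, 0.3, size=(5000, 2)), rng.normal(2, 0.3, size=(5000, 2))])
    # rng.shuffle(data)
    # centers = oda_clustering(iter(data))

Under this scheme the number of distinct prototypes grows only when the temperature falls below a critical value at which the duplicated copies separate, which illustrates the bifurcation phenomenon and the performance-complexity tradeoff mentioned in the summary.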
DOI: 10.1109/TNNLS.2021.3138676