Exploration and generalization in deep learning with SwitchPath activations

Bibliographic details
Published in: Machine Learning, Vol. 114, No. 9, p. 200
Authors: Di Cecco, Antonio; Papini, Andrea; Metta, Carlo; Fantozzi, Marco; Galfrè, Silvia Giulia; Morandin, Francesco; Parton, Maurizio
Format: Journal Article
Language: English
Published: Dordrecht: Springer Nature B.V., 01.09.2025
ISSN: 0885-6125, 1573-0565
Online access: Full text
Description
Abstract: This work provides a comprehensive theoretical and empirical analysis of SwitchPath, a stochastic activation function that improves learning dynamics by probabilistically toggling between a neuron's standard activation and its negation. We develop theoretical foundations and demonstrate its impact in multiple scenarios. By maintaining gradient flow and injecting controlled stochasticity, the method improves generalization, uncertainty estimation, and training efficiency. Experiments in classification show consistent gains over ReLU and Leaky ReLU across CNNs and Vision Transformers, with reduced overfitting and better test accuracy. In generative modeling, a novel two-phase training scheme significantly mitigates mode collapse and accelerates convergence. Our theoretical analysis reveals that SwitchPath introduces a form of multiplicative noise that acts as a structural regularizer. Additional empirical investigations show improved information propagation and reduced model complexity. These results establish this activation mechanism as a simple yet effective way to enhance exploration, regularization, and reliability in modern neural networks.
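
Based solely on the abstract's description, one plausible minimal sketch of such an activation in PyTorch follows. The switch probability p, the choice of ReLU as the base activation, the per-element gating, and the deterministic evaluation-time path are all illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class SwitchPathSketch(nn.Module):
    # Illustrative sketch only: the abstract describes probabilistically
    # toggling between a neuron's standard activation and its negation.
    # Here this is modeled as a per-element ±1 multiplicative gate, which
    # also matches the abstract's "multiplicative noise" characterization.
    def __init__(self, p: float = 0.1, base_act: nn.Module = None):
        super().__init__()
        self.p = p  # assumed switch probability (not specified in the abstract)
        self.base_act = base_act if base_act is not None else nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base_act(x)
        if self.training:
            # flip == 1 with probability p: take the negated path.
            flip = torch.bernoulli(torch.full_like(y, self.p))
            sign = 1.0 - 2.0 * flip  # maps {0, 1} -> {+1, -1}
            y = y * sign  # ±1 multiplicative noise on the activation
        return y  # assumed deterministic behavior at evaluation time

In use, a module like this would stand in for nn.ReLU() inside a network; consult the paper (DOI below) for the actual definition, defaults, and the two-phase training scheme for generative models.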
DOI: 10.1007/s10994-025-06840-y