Adaptive learning algorithm and its convergence analysis with complex-valued error loss network


Published in: Neural Networks, Vol. 190, p. 107677
Main authors: Qian, Guobing; Lin, Bingqing; Mei, Jiaojiao; Qian, Junhui; Wang, Shiyuan
Medium: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.10.2025
ISSN: 0893-6080, 1879-2782
Description
Summary: In machine learning, the initial task is to construct a model that is capable of predicting the outcomes of new samples with the help of training samples. The loss function plays a key role in this task, as it acts as an important indicator to evaluate the overall model prediction performance. Building upon the work of Chen et al., this study introduces a novel model named the Complex Error Loss Network (CELN). The CELN is designed to address scenarios involving complex-valued signals and parameters within the context of supervised learning. Leveraging the contraction mapping theorem, this study investigates the convergence of the corresponding adaptive learning algorithm, underscoring the inherent capability of CELN to consistently approach and potentially reach the global minimum or optimal solution through iterative methods. CELN reduces the error by at least 4.1% compared to benchmark methods while maintaining stability in non-Gaussian noise scenarios.
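The abstract does not give CELN's update rule, so the following is only a minimal sketch of the general setting it describes: a complex-valued adaptive learning algorithm whose iteration, for a small enough step size, behaves as a contraction and converges toward the optimal weights. The example uses the standard complex LMS update (a Wirtinger-gradient descent on the squared error modulus); the 4-tap system, step size, and signal model are all hypothetical choices for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex-valued system to be identified (not from the paper).
w_true = np.array([0.5 + 0.5j, -0.3 + 0.1j, 0.2 - 0.4j, 0.1 + 0.0j])
n_taps = len(w_true)
n_samples = 3000
mu = 0.01  # step size; small enough that the weight update acts as a contraction

# Circular complex Gaussian input signal with unit power per tap.
x = (rng.standard_normal(n_samples)
     + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)

w = np.zeros(n_taps, dtype=complex)
for n in range(n_taps, n_samples):
    u = x[n - n_taps:n][::-1]   # current input regressor
    d = w_true @ u              # desired (noiseless) output
    e = d - w @ u               # complex prediction error
    # Complex LMS step: gradient of |e|^2 with respect to conj(w) is -e*conj(u).
    w = w + mu * e * np.conj(u)

# The iterates approach the fixed point w_true of the update map.
final_dev = np.max(np.abs(w - w_true))
print(final_dev)
```

With a noiseless desired signal the weight error shrinks geometrically, which is the fixed-point behavior the contraction mapping theorem formalizes; the paper's contribution is establishing this kind of guarantee for the CELN loss rather than for plain squared error.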
DOI: 10.1016/j.neunet.2025.107677