GPU‐accelerated and mixed norm regularized online extreme learning machine

Detailed bibliography
Published in: Concurrency and Computation, Volume 34, Issue 15
Main authors: Polat, Önder; Kayhan, Sema Koç
Format: Journal Article
Language: English
Published: Hoboken: Wiley Subscription Services, Inc., 10.07.2022
ISSN: 1532-0626, 1532-0634
Description
Summary: Extreme learning machine (ELM) is a prominent example of a neural network with fast training speed and good prediction performance. An online version of ELM, called online sequential extreme learning machine (OS-ELM), has also been proposed for sequential training. Combined with the need for regularization to prevent over-fitting and the large number of neurons required in the hidden layer, OS-ELM demands a huge amount of computation power for large-scale data. In this article, a mixed norm ($l_{2,1}$) regularized online machine learning algorithm (MRO-ELM), based on the alternating direction method of multipliers (ADMM), is proposed. A linear combination of the mixed norm and the Frobenius norm regularization is applied within the ADMM framework, and the update formulas are derived. A graphics processing unit (GPU) accelerated version of MRO-ELM (GPU-MRO-ELM) is also proposed to reduce the training time by processing appropriate parts in parallel using custom kernels. In addition, a novel automatic hyper-parameter tuning method is incorporated into GPU-MRO-ELM using progressive validation with GPU acceleration. The experimental results show that the MRO-ELM algorithm and its GPU version outperform OS-ELM in terms of training speed and testing accuracy. Also, compared to cross-validation, the proposed automatic hyper-parameter tuning demonstrates a dramatic reduction in tuning time.
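The regularized objective described in the abstract, a linear combination of the $l_{2,1}$ mixed norm and the Frobenius norm minimized with ADMM, corresponds to a standard row-sparse least-squares problem over the ELM output weights. The sketch below is a minimal, generic batch illustration of that ADMM splitting, not the article's MRO-ELM online update formulas, custom GPU kernels, or tuning procedure; the function name admm_l21_ridge and the parameters H, T, lam_21, lam_fro, and rho are assumptions made here for illustration.

# Generic sketch: l2,1 + Frobenius regularized least squares solved with ADMM.
# NOT the MRO-ELM updates from the article; shapes and parameter names are assumed.
import numpy as np

def admm_l21_ridge(H, T, lam_21=0.1, lam_fro=0.01, rho=1.0, n_iter=100):
    """Minimize 0.5*||H W - T||_F^2 + lam_fro/2*||W||_F^2 + lam_21*||W||_{2,1} over W."""
    n_hidden, n_out = H.shape[1], T.shape[1]
    W = np.zeros((n_hidden, n_out))
    Z = np.zeros_like(W)          # split variable carrying the l2,1 term
    U = np.zeros_like(W)          # scaled dual variable
    # Precompute the normal matrix of the quadratic W-subproblem (could be factorized once).
    A = H.T @ H + (lam_fro + rho) * np.eye(n_hidden)
    HtT = H.T @ T
    for _ in range(n_iter):
        # W-update: ridge-type least squares with the ADMM penalty term.
        W = np.linalg.solve(A, HtT + rho * (Z - U))
        # Z-update: row-wise block soft-thresholding, the proximal operator of the l2,1 norm.
        V = W + U
        row_norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - (lam_21 / rho) / np.maximum(row_norms, 1e-12))
        Z = shrink * V
        # Dual update.
        U = U + W - Z
    return Z   # rows that are exactly zero correspond to prunable hidden neurons

# Tiny usage example with random data (hypothetical shapes).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 50))   # hidden-layer output matrix
    T = rng.standard_normal((200, 3))    # target matrix
    W = admm_l21_ridge(H, T)
    print("non-zero hidden neurons:", int((np.abs(W).sum(axis=1) > 0).sum()))

Rows of the returned weight matrix that shrink exactly to zero indicate hidden neurons the mixed norm effectively prunes, which is the usual motivation for $l_{2,1}$ regularization in networks with a large hidden layer.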
DOI: 10.1002/cpe.6967