Entropy Regularized Likelihood Learning on Gaussian Mixture: Two Gradient Implementations for Automatic Model Selection

Detailed Bibliography
Published in: Neural Processing Letters, Vol. 25, No. 1, pp. 17-30
Main Author: Lu, Zhiwu
Format: Journal Article
Language: English
Published: Dordrecht: Springer (Springer Nature B.V.), 01.02.2007
ISSN: 1370-4621, 1573-773X
Online Access: Get full text
Description
Summary: In Gaussian mixture modeling, it is crucial to select the number of Gaussians, i.e., the mixture model, for a given sample data set. Under regularization theory, we aim to solve this model selection problem by implementing entropy regularized likelihood (ERL) learning on a Gaussian mixture via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and yields a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further give an adaptive gradient implementation of ERL learning on a Gaussian mixture, together with a theoretical analysis, and identify a mechanism of generalized competitive learning implicit in ERL learning.
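
To make the summary concrete, below is a minimal sketch of batch-gradient ERL learning on a one-dimensional Gaussian mixture. The precise objective and the regularization weight beta are assumptions inferred from the abstract (mean log-likelihood plus a weighted negative posterior-entropy term); the paper's exact formulation may differ, and JAX automatic differentiation stands in here for the paper's analytic gradients.

    # Sketch (not the paper's exact algorithm) of batch-gradient ERL
    # learning on a 1-D Gaussian mixture, using JAX for autodiff.
    import jax
    import jax.numpy as jnp

    def log_gauss(x, mu, log_sigma):
        # Log-density of N(mu, sigma^2) at each sample; shape (N, k).
        sigma = jnp.exp(log_sigma)
        return (-0.5 * jnp.log(2 * jnp.pi) - log_sigma
                - 0.5 * ((x[:, None] - mu[None, :]) / sigma[None, :]) ** 2)

    def erl(params, x, beta=0.3):
        # Assumed ERL objective: mean log-likelihood plus beta times the
        # negative mean posterior entropy; sharpening the posteriors
        # drives redundant components' mixing weights toward zero.
        logits, mu, log_sigma = params
        log_alpha = jax.nn.log_softmax(logits)        # mixing proportions
        log_joint = log_alpha[None, :] + log_gauss(x, mu, log_sigma)
        log_like = jax.scipy.special.logsumexp(log_joint, axis=1)
        log_post = log_joint - log_like[:, None]      # log P(k | x_t)
        neg_entropy = jnp.sum(jnp.exp(log_post) * log_post, axis=1)
        return jnp.mean(log_like) + beta * jnp.mean(neg_entropy)

    # Two true Gaussians, deliberately over-fitted with k = 5 components.
    x = jnp.concatenate([
        jax.random.normal(jax.random.PRNGKey(0), (200,)) - 4.0,
        jax.random.normal(jax.random.PRNGKey(1), (200,)) + 4.0,
    ])
    k = 5
    params = (jnp.zeros(k),                 # mixing logits
              jnp.linspace(-6.0, 6.0, k),   # component means
              jnp.zeros(k))                 # log standard deviations

    grad_fn = jax.jit(jax.grad(erl))
    lr = 0.1
    for step in range(3000):
        grads = grad_fn(params, x)
        params = tuple(p + lr * g for p, g in zip(params, grads))  # ascent

    print("learned mixing weights:", jax.nn.softmax(params[0]))

With beta > 0, the mixing weights of the surplus components shrink toward zero while the two genuine components survive, which mirrors the automatic model selection and the competitive-learning mechanism described in the summary.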
DOI: 10.1007/s11063-006-9028-3