Building an Ensemble of Fine-Tuned Naive Bayesian Classifiers for Text Classification

Detailed Bibliography
Published in: Entropy (Basel, Switzerland), Volume 20, Issue 11, p. 857
Main Authors: El Hindi, Khalil; AlSalman, Hussien; Qasem, Safwan; Al Ahmadi, Saad
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 07.11.2018
ISSN: 1099-4300
Description
Summary: Text classification is one domain in which the naive Bayesian (NB) learning algorithm performs remarkably well. However, making further improvements in performance using ensemble-building techniques has proved challenging because NB is a stable algorithm. This work shows that, while an ensemble of NB classifiers achieves little or no improvement in classification accuracy, an ensemble of fine-tuned NB classifiers can achieve a remarkable improvement in accuracy. We propose a fine-tuning algorithm for text classification that is both more accurate and less stable than the NB algorithm and the fine-tuning NB (FTNB) algorithm. This combination of higher accuracy and lower stability makes it more suitable than the FTNB algorithm for building ensembles of classifiers using bagging. Our empirical experiments, using 16 benchmark text-classification data sets, show significant improvements for most data sets.
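The record contains no code, but the bagging baseline the summary contrasts against can be illustrated. The sketch below is a minimal example, assuming scikit-learn: it compares a single multinomial NB text classifier with a bagged ensemble of plain NB classifiers. The data set, vectorizer, and ensemble size are illustrative choices, not the paper's setup, and the paper's FTNB fine-tuning step (its actual contribution) is not reproduced here.

# Minimal sketch, assuming scikit-learn. Because NB is a stable learner,
# the bagged score typically changes little versus a single NB model,
# which is the behaviour the paper's less stable fine-tuned NB variant
# is designed to overcome. The FTNB step itself is not reproduced here.
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Two-class text task as a stand-in for the paper's 16 benchmark data sets.
data = fetch_20newsgroups(subset="train",
                          categories=["sci.space", "rec.autos"],
                          remove=("headers", "footers", "quotes"))

single = make_pipeline(CountVectorizer(), MultinomialNB())
bagged = make_pipeline(CountVectorizer(),
                       BaggingClassifier(MultinomialNB(), n_estimators=25,
                                         random_state=0))

for name, clf in [("single NB", single), ("bagged NB", bagged)]:
    scores = cross_val_score(clf, data.data, data.target, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

Bagging trains each base classifier on a bootstrap resample of the training set; with a stable learner such as NB, the resampled models barely differ, so their combined vote closely matches the single model. This is why the abstract argues for replacing plain NB with a less stable fine-tuned variant when building bagged ensembles.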
DOI: 10.3390/e20110857