Topic specificity: A descriptive metric for algorithm selection and finding the right number of topics


Bibliographic Details
Published in: Natural Language Processing Journal, Volume 8; 100082
Main Authors: Rijcken, Emil; Zervanou, Kalliopi; Mosteiro, Pablo; Scheepers, Floortje; Spruit, Marco; Kaymak, Uzay
Format: Journal Article
Language: English
Published: Elsevier, 01.09.2024
ISSN: 2949-7191
Online Access: Get full text
Description
Summary: Topic modeling is a prevalent task for discovering the latent structure of a corpus, identifying a set of topics that represent the underlying themes of the documents. Despite its popularity, issues with its evaluation metric, the coherence score, give rise to two common challenges: algorithm selection and determining the number of topics. To address these two issues, we propose the topic specificity metric, which captures the relative frequency of topic words in the corpus and serves as a proxy for the specificity of a word. In this work, we first formulate the metric. Second, we demonstrate that algorithms train topics at different specificity levels. This insight can be used to address algorithm selection, as it allows users to distinguish and select algorithms with the desired specificity level. Lastly, we show a strictly positive monotonic correlation between topic specificity and the number of topics for LDA, FLSA-W, NMF and LSI. This correlation can be used to address the selection of the number of topics, as it allows users to adjust the number of topics to their desired level. Moreover, our descriptive metric provides a new perspective for characterizing topic models, allowing them to be understood better.
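The abstract describes the metric only at a high level: it is built from the relative frequency of topic words in the corpus, with frequent words treated as generic and rare words as specific. The paper's exact formula is not given in this record, so the sketch below is a hypothetical illustration of that idea — scoring a topic by one minus the mean relative corpus frequency of its top words; the names `topic_specificity`, `topics`, and `corpus_tokens` are assumptions, not the authors' API.

```python
from collections import Counter

def topic_specificity(topics, corpus_tokens):
    """Hypothetical sketch of a specificity-style score.

    Each topic (a list of its top words) gets a score in [0, 1]:
    topics dominated by rare corpus words score high (specific),
    topics dominated by frequent words score low (generic).
    The published metric may be defined differently.
    """
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    scores = []
    for topic_words in topics:
        # mean relative corpus frequency of the topic's top words
        mean_rel_freq = sum(counts[w] / total for w in topic_words) / len(topic_words)
        scores.append(1 - mean_rel_freq)  # higher = more specific
    return scores

corpus = "the cat sat on the mat the dog ate the bone".split()
topics = [["the", "on"],    # generic, high-frequency words
          ["cat", "dog"]]   # rarer, more specific words
print(topic_specificity(topics, corpus))
```

Under this toy scoring, the second topic scores higher than the first, matching the intuition that rare-word topics are more specific; the abstract's point is that different algorithms (LDA, FLSA-W, NMF, LSI) tend to sit at characteristically different levels on such a scale.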
DOI: 10.1016/j.nlp.2024.100082