Probabilistic Keyphrase Generation From Copy and Generating Spaces



Detailed Bibliography
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 11, pp. 15956–15970
Main Authors: Yao, Yu; Yang, Peng; Zhao, Guangzhen; Ge, Yanyan; Yang, Ying
Format: Journal Article
Language: English
Published: United States: IEEE, 01.11.2024
ISSN: 2162-237X, 2162-2388
Description
Summary: Keyphrase generation is one of the most fundamental tasks in natural language processing (NLP). Most existing works on keyphrase generation focus on using a holistic distribution to optimize the negative log-likelihood loss, but they do not directly manipulate the copy and generating spaces, which may reduce the generalizability of the decoder. Additionally, existing keyphrase models are either unable to determine a dynamic number of keyphrases or determine that number only implicitly. In this article, we propose a probabilistic keyphrase generation model from copy and generating spaces. The proposed model is built upon the vanilla variational encoder-decoder (VED) framework. On top of VED, two separate latent variables are adopted to model the distribution of data within the latent copy and generating spaces, respectively. Specifically, we adopt a von Mises-Fisher (vMF) distribution to obtain a condensed variable for modifying the generating probability distribution over the predefined vocabulary. Meanwhile, we utilize a clustering module, designed to promote Gaussian mixture learning, to subsequently extract a latent variable for the copy probability distribution. Moreover, we exploit a natural property of the Gaussian mixture network and use the number of filtered components to determine the number of keyphrases. The approach is trained with latent-variable probabilistic modeling, neural variational inference, and self-supervised learning. Experiments on social media and scientific-article datasets show that the model outperforms state-of-the-art baselines in generating accurate predictions with controllable keyphrase numbers.
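The core mechanism the abstract describes, combining a copy distribution over source tokens with a generation distribution over a fixed vocabulary, can be illustrated with a minimal NumPy sketch. This is a generic pointer-generator-style mixture, not the paper's actual model: the latent-variable modulation (the vMF variable for the generating space and the Gaussian-mixture variable for the copy space) is abstracted away, and all names and toy values are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_copy_generate(gen_logits, copy_scores, src_token_ids, vocab_size, gate):
    """Blend a generation distribution over the vocabulary with a copy
    distribution over source positions, weighted by a scalar gate in [0, 1].

    In the paper's setting, gen_logits and copy_scores would each be
    modulated by their own latent variable; here they are plain inputs.
    """
    p_gen = softmax(gen_logits)            # distribution over the full vocab
    p_copy_src = softmax(copy_scores)      # distribution over source positions
    p_copy = np.zeros(vocab_size)
    for pos, tok in enumerate(src_token_ids):
        p_copy[tok] += p_copy_src[pos]     # scatter-add source mass to vocab ids
    return gate * p_gen + (1.0 - gate) * p_copy

# Toy example: 5-word vocabulary, source sentence is token ids [2, 4, 2].
p = mix_copy_generate(
    gen_logits=np.zeros(5),                 # uniform generation distribution
    copy_scores=np.array([1.0, 0.5, 1.0]),  # attention-like scores per position
    src_token_ids=[2, 4, 2],
    vocab_size=5,
    gate=0.5,
)
# Token 2 appears twice in the source, so it receives the most copy mass.
```

The result is still a valid probability distribution (it sums to 1), and tokens present in the source receive extra mass proportional to their copy scores; raising `gate` toward 1 shifts the decoder from copying toward free generation.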
DOI: 10.1109/TNNLS.2023.3290789