Context-aware ranking refinement with attentive semi-supervised autoencoders



Detailed Bibliography
Published in: Soft Computing (Berlin, Germany), Volume 26, Issue 24, pp. 13941-13952
Main Authors: Xu, Bo; Lin, Hongfei; Lin, Yuan; Xu, Kan
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.12.2022
ISSN: 1432-7643, 1433-7479
Description
Summary: Learning to rank methods aim to learn a refined ranking model from labeled data to achieve the desired ranking performance. However, the learned model may not improve performance on every individual query, because the distributions of relevant documents vary across queries in the document feature space. The performance of learned ranking models may therefore be largely affected by the usefulness of the document features. To generate high-quality document ranking features, we capture the local context information of individual queries from the top-ranked documents of an initial retrieval using pseudo-relevance feedback. Based on these top-ranked feedback documents, we propose an attentive semi-supervised autoencoder that refines the ranked results using an optimized ranking-oriented reconstruction loss. Furthermore, we devise hybrid listwise query constraints to capture the characteristics of relevant documents for different queries. We evaluate the proposed ranking model on the LETOR collections OHSUMED, MQ2007 and MQ2008. Our model achieves consistent improvements in ranking performance over the baseline methods.
DOI: 10.1007/s00500-022-07433-w
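
The abstract describes, at a high level, coupling an autoencoder's reconstruction objective with a listwise ranking objective over a query's candidate documents. The sketch below is only a minimal PyTorch illustration of that general idea, not the authors' implementation: the feature-wise attention layer, the layer sizes, the ListNet-style listwise term standing in for the paper's hybrid listwise query constraints, and all names (AttentiveAutoencoder, combined_loss) are assumptions made for illustration.

# Minimal sketch (assumed architecture, not the paper's): an autoencoder over
# document feature vectors whose training loss combines a reconstruction term
# with a ListNet-style listwise ranking term for one query.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveAutoencoder(nn.Module):
    """Autoencoder over document feature vectors with a relevance score head."""

    def __init__(self, in_dim: int, hidden_dim: int = 32):
        super().__init__()
        # Simple feature-wise attention over the input (an assumed form).
        self.attention = nn.Sequential(nn.Linear(in_dim, in_dim), nn.Softmax(dim=-1))
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, in_dim)
        self.scorer = nn.Linear(hidden_dim, 1)  # relevance score head

    def forward(self, x):
        weighted = x * self.attention(x)        # re-weight document features
        z = torch.relu(self.encoder(weighted))  # latent representation
        recon = self.decoder(z)                 # reconstruct input features
        scores = self.scorer(z).squeeze(-1)     # one relevance score per document
        return recon, scores


def combined_loss(recon, x, scores, labels, alpha=0.5):
    """Reconstruction term plus a ListNet-style listwise term for one query.

    In a semi-supervised setting, unlabeled pseudo-relevance feedback documents
    could contribute to the reconstruction term only; here all are labeled.
    """
    recon_loss = F.mse_loss(recon, x)
    listwise = -(F.softmax(labels.float(), dim=0) * F.log_softmax(scores, dim=0)).sum()
    return alpha * recon_loss + (1.0 - alpha) * listwise


if __name__ == "__main__":
    torch.manual_seed(0)
    docs = torch.randn(10, 46)            # toy query: 10 candidates, 46 LETOR-style features
    labels = torch.randint(0, 3, (10,))   # graded relevance judgments (0-2)
    model = AttentiveAutoencoder(in_dim=46)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        recon, scores = model(docs)
        loss = combined_loss(recon, docs, scores, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Re-rank the candidate documents by the learned scores.
    print(torch.argsort(model(docs)[1], descending=True))

The listwise term above is one plausible stand-in for a ranking-oriented objective; the paper's actual attention mechanism, hybrid query constraints and training procedure are described only in the full text.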