Bandit Learning for Diversified Interactive Recommendation

Bibliographic Details
Title: Bandit Learning for Diversified Interactive Recommendation
Authors: Liu, Yong, Xiao, Yingtai, Wu, Qiong, Miao, Chunyan, Zhang, Juyong
Publication Information: 2019-06-30
Publication Type: Electronic Resource
Abstract: Interactive recommender systems, which enable interactions between users and the recommender system, have attracted increasing research attention. Previous methods mainly focus on optimizing recommendation accuracy, but they usually ignore the diversity of the recommendation results, which often leads to unsatisfying user experiences. In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC$^2$B), for interactive recommendation with users' implicit feedback. Specifically, DC$^2$B employs a determinantal point process in the recommendation procedure to promote diversity of the recommendation results. To learn the model parameters, a Thompson sampling-type algorithm based on variational Bayesian inference is proposed. In addition, a theoretical regret analysis is provided to guarantee the performance of DC$^2$B. Extensive experiments on real datasets demonstrate the effectiveness of the proposed method.
Index Terms: Computer Science - Information Retrieval, Computer Science - Machine Learning, Statistics - Machine Learning, text
URL: http://arxiv.org/abs/1907.01647
Availability: Open access content.
Other Numbers: COO oai:arXiv.org:1907.01647; 1228355261
Original Source: CORNELL UNIV
From OAIster®, provided by the OCLC Cooperative.
Document Code: edsoai.on1228355261
Database: OAIster
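The abstract states that DC$^2$B uses a determinantal point process (DPP) to promote diversity among recommended items. As a loose illustration of that idea only (not the paper's DC$^2$B algorithm), the sketch below shows the standard greedy approximate MAP inference for a DPP: repeatedly add the item that most increases the determinant of the selected kernel submatrix, which rewards relevant items while penalizing items similar to ones already chosen. The kernel construction and all names here are illustrative; NumPy is assumed.

```python
import numpy as np

def greedy_dpp_selection(kernel, k):
    """Greedily pick k items, at each step maximizing det(kernel[S, S]).

    This is the standard approximate MAP inference for a DPP: a larger
    determinant means the chosen items span more "volume", i.e. are
    jointly relevant and mutually dissimilar.
    """
    n = kernel.shape[0]
    selected = []
    for _ in range(k):
        best_item, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # slogdet is numerically safer than det for near-singular submatrices
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best_gain, best_item = gain, i
        selected.append(best_item)
    return selected

# Toy kernel: items 0 and 1 are near-duplicates, item 2 is distinct.
features = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
kernel = features @ features.T + 1e-6 * np.eye(3)  # jitter keeps it positive definite
print(greedy_dpp_selection(kernel, 2))  # → [0, 2]
```

Because the submatrix determinant collapses toward zero when two selected items are nearly identical, the greedy step skips the duplicate (item 1) and picks the distinct item 2 instead, which is exactly the diversity-promoting behavior the abstract attributes to the DPP component.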