Cross-domain recommendation via knowledge distillation
| Published in: | Knowledge-Based Systems, Vol. 311, p. 113112 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 28.02.2025 |
| Online access: | Get full text |
Summary:

Recommendation systems frequently suffer from data sparsity, resulting in suboptimal recommendations. A prominent solution to this problem is Cross-Domain Recommendation (CDR), which leverages data from multiple domains to mitigate data sparsity and cold-start issues. Nevertheless, current mainstream methods, such as feature mapping and co-training over domain relationships, overlook latent user–user and user–item similarities in the shared user–item interaction graph. Motivated by these deficiencies, this paper introduces KDCDR, a novel cross-domain recommendation framework that relies on knowledge distillation to exploit the information in that graph. KDCDR aims to improve recommendation performance in both domains by efficiently utilizing the shared interaction graph, and it strengthens user and item representations by jointly exploring user–user similarity, item–item similarity, and user–item interactions. The proposed scheme treats each inner-domain graph as a teacher and the cross-domain graph as a student; the student learns by distilling knowledge from the two teachers through a high-temperature distillation process. Furthermore, a dynamic weighting mechanism regulates the learning process so that the student network neither overly favors one domain nor absorbs knowledge that the teachers have taught incorrectly. Extensive experiments on four real-world datasets show that KDCDR achieves significant improvements over state-of-the-art methods, confirming its effectiveness in addressing data sparsity and enhancing cross-domain recommendation performance. Our code and data are available at https://github.com/pandas-bondage/KDCDR.
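The abstract describes distilling from two inner-domain teachers into a cross-domain student at high temperature, with dynamic weights guarding against one domain dominating or teacher mistakes propagating. The following is a minimal PyTorch sketch of that idea; the function names, the softmax-over-negative-loss weighting rule, and the toy tensors are illustrative assumptions, not the authors' implementation (which is available at the GitHub link above).

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature):
    """Temperature-scaled KL distillation loss (Hinton-style soft targets)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def dynamic_teacher_weights(teacher_loss_a, teacher_loss_b):
    """Down-weight the teacher that currently fits its domain's data worse,
    so the student neither favors one domain nor copies a teacher's mistakes.
    (Assumed rule; the paper's exact weighting scheme may differ.)"""
    weights = torch.softmax(torch.stack([-teacher_loss_a, -teacher_loss_b]), dim=0)
    return weights[0], weights[1]

# Toy usage: scores over 5 candidate items for a batch of 3 users.
student = torch.randn(3, 5, requires_grad=True)   # cross-domain student scores
teacher_a = torch.randn(3, 5)                     # inner-domain teacher, domain A
teacher_b = torch.randn(3, 5)                     # inner-domain teacher, domain B
w_a, w_b = dynamic_teacher_weights(torch.tensor(0.7), torch.tensor(1.2))
loss = w_a * distill_loss(student, teacher_a, temperature=4.0) \
     + w_b * distill_loss(student, teacher_b, temperature=4.0)
loss.backward()
```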
Highlights:

- Proposing a knowledge-distillation-based method to tackle the CDR problem.
- Presenting a model that uses dynamic weights to rectify teacher knowledge inaccuracies.
- Designing a VGAE to structure superior user and item feature embeddings (a generic sketch follows below).
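The third highlight mentions a VGAE for building user and item feature embeddings. Below is a generic variational graph autoencoder sketch following the standard Kipf–Welling formulation, with a one-layer GCN encoder and an inner-product edge decoder; the layer sizes, the normalized adjacency input, and how the module wires into KDCDR are assumptions, not details from the record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    """Minimal variational graph autoencoder (sketch).

    Encodes the nodes (users and items) of a symmetrically normalized
    adjacency matrix `a_hat` into Gaussian latents; an inner-product
    decoder reconstructs edge probabilities from the sampled latents.
    """
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)

    def encode(self, a_hat, x):
        h = F.relu(a_hat @ self.w0(x))            # one GCN propagation step
        return a_hat @ self.w_mu(h), a_hat @ self.w_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu, logvar.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, a_hat, x):
        mu, logvar = self.encode(a_hat, x)
        z = self.reparameterize(mu, logvar)
        adj_rec = torch.sigmoid(z @ z.t())        # inner-product decoder
        # KL term regularizes the latent space toward a standard Gaussian.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return adj_rec, kl

# Toy usage: 8 nodes with 16-dim input features and a random normalized graph.
n = 8
a_hat = torch.softmax(torch.randn(n, n), dim=-1)  # stand-in for D^-1/2 A D^-1/2
x = torch.randn(n, 16)
model = VGAE(in_dim=16, hid_dim=32, lat_dim=8)
adj_rec, kl = model(a_hat, x)
```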
| ISSN: | 0950-7051 |
|---|---|
| DOI: | 10.1016/j.knosys.2025.113112 |