A proportional-integral-derivative-incorporated stochastic gradient descent-based latent factor analysis model
| Published in: | Neurocomputing (Amsterdam), Vol. 427, pp. 29–39 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 28.02.2021 |
| ISSN: | 0925-2312, 1872-8286 |
| Summary: | Large-scale relationships, such as user-item preferences in a recommender system, are mostly described by a high-dimensional and sparse (HiDS) matrix. A latent factor analysis (LFA) model extracts useful knowledge from an HiDS matrix efficiently, where stochastic gradient descent (SGD) is frequently adopted as the learning algorithm. However, a standard SGD algorithm updates a decision parameter with the stochastic gradient on the instant loss only, without considering information described by prior updates. Hence, an SGD-based LFA model commonly consumes many iterations to converge, which greatly affects its practicability. On the other hand, a proportional-integral-derivative (PID) controller makes a learning model converge fast by considering its historical errors from the initial state up to the current moment. Motivated by this observation, this paper proposes a PID-incorporated SGD-based LFA (PSL) model. Its main idea is to rebuild the instant error on a single instance following the principle of PID, and then substitute this rebuilt error into an SGD algorithm to accelerate model convergence. Empirical studies on six widely accepted HiDS matrices indicate that, compared with state-of-the-art LFA models, a PSL model achieves significantly higher computational efficiency as well as highly competitive prediction accuracy for the missing data of an HiDS matrix. |
|---|---|
| DOI: | 10.1016/j.neucom.2020.11.029 |
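
To make the idea in the summary concrete, below is a minimal sketch of a PID-rebuilt error substituted into a standard SGD update for latent factor analysis. It assumes per-instance integral and derivative accumulators and illustrative gain names (`KP`, `KI`, `KD`); the paper's exact rebuilt-error formulation, decay scheme, and hyper-parameters may differ, so treat this as a sketch of the technique rather than the authors' implementation.

```python
import numpy as np

def psl_sgd(R, num_factors=20, lr=0.01, reg=0.05,
            KP=1.0, KI=0.01, KD=0.1, epochs=50, seed=0):
    """Sketch of PID-incorporated SGD for LFA on a sparse rating set
    R = [(user, item, rating), ...]. KP/KI/KD and the per-instance
    error accumulators are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in R)
    n_items = 1 + max(i for _, i, _ in R)
    P = rng.standard_normal((n_users, num_factors)) * 0.1
    Q = rng.standard_normal((n_items, num_factors)) * 0.1
    err_sum = {}    # integral term: accumulated error per instance
    err_prev = {}   # derivative term: previous error per instance
    for _ in range(epochs):
        for u, i, r in R:
            e = r - P[u] @ Q[i]                  # instant error
            s = err_sum.get((u, i), 0.0) + e     # accumulate (I term)
            d = e - err_prev.get((u, i), 0.0)    # discrete derivative (D term)
            err_sum[(u, i)], err_prev[(u, i)] = s, e
            e_pid = KP * e + KI * s + KD * d     # PID-rebuilt error
            # Substitute the rebuilt error into the standard SGD step
            # with L2 regularization on the latent factors.
            pu = P[u].copy()
            P[u] += lr * (e_pid * Q[i] - reg * P[u])
            Q[i] += lr * (e_pid * pu - reg * Q[i])
    return P, Q

# Toy usage: three users, three items, a few observed entries.
R = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 2.0)]
P, Q = psl_sgd(R, num_factors=4, epochs=100)
print(round(float(P[0] @ Q[0]), 2))  # should approach 5.0
```

With `KI = KD = 0` and `KP = 1` the update reduces to plain SGD-based LFA, which is why the integral and derivative terms can be read as the "historical information" the summary says a standard SGD step ignores.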