Distributed kernel learning using Kernel Recursive Least Squares

Published in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5500-5504
Main Authors: Fraser, Nicholas J.; Moss, Duncan J. M.; Epain, Nicolas; Leong, Philip H. W.
Format: Conference paper
Language: English
Published: IEEE, 1 April 2015
ISSN: 1520-6149
Description
Summary: Constructing accurate models that represent the underlying structure of Big Data is a costly process that usually constitutes a compromise between computation time and model accuracy. Methods addressing these issues often employ parallelisation to handle processing. Many of these methods target the Support Vector Machine (SVM) and provide a significant speed-up over batch approaches. However, the convergence of these methods often relies on multiple passes through the data. In this paper, we present a parallelised algorithm that constructs a model equivalent to a serial approach, whilst requiring only a single pass of the data. We first employ the Kernel Recursive Least Squares (KRLS) algorithm to construct several models from subsets of the overall data. We then show that these models can be combined using KRLS to create a single compact model. Our parallelised KRLS methodology significantly improves execution time and demonstrates comparable accuracy when compared to the parallel and serial SVM approaches.
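To make the summary concrete, below is a minimal sketch of online KRLS with approximate-linear-dependency (ALD) sparsification in the style of Engel et al. (2004), followed by a train-on-subsets-then-combine step in the spirit of the paper's approach. This is not the authors' implementation: the RBF kernel, the threshold `nu`, and the merge step (replaying each sub-model's dictionary points and predictions through a fresh KRLS instance) are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel (an assumed choice; the paper does not fix one here)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class KRLS:
    """Online Kernel Recursive Least Squares with an ALD sparsification test."""

    def __init__(self, kernel=rbf, nu=1e-3):
        self.kernel, self.nu = kernel, nu
        self.dict_ = []      # dictionary of retained input vectors
        self.alpha = None    # kernel expansion weights
        self.Kinv = None     # inverse kernel matrix of the dictionary
        self.P = None        # projection covariance for non-growing updates

    def predict(self, x):
        if not self.dict_:
            return 0.0
        k = np.array([self.kernel(d, x) for d in self.dict_])
        return float(k @ self.alpha)

    def update(self, x, y):
        if not self.dict_:
            ktt = self.kernel(x, x)
            self.dict_ = [x]
            self.Kinv = np.array([[1.0 / ktt]])
            self.alpha = np.array([y / ktt])
            self.P = np.array([[1.0]])
            return
        k = np.array([self.kernel(d, x) for d in self.dict_])
        a = self.Kinv @ k
        delta = self.kernel(x, x) - k @ a      # ALD novelty measure
        err = y - k @ self.alpha               # instantaneous prediction error
        if delta > self.nu:
            # Sample is nearly linearly independent: grow the dictionary.
            m = len(self.dict_)
            Kinv = np.zeros((m + 1, m + 1))
            Kinv[:m, :m] = delta * self.Kinv + np.outer(a, a)
            Kinv[:m, m] = Kinv[m, :m] = -a
            Kinv[m, m] = 1.0
            self.Kinv = Kinv / delta
            P = np.eye(m + 1)
            P[:m, :m] = self.P
            self.P = P
            self.alpha = np.append(self.alpha - a * (err / delta), err / delta)
            self.dict_.append(x)
        else:
            # Reduced update: adjust weights without growing the dictionary.
            q = self.P @ a / (1.0 + a @ self.P @ a)
            self.P = self.P - np.outer(q, a @ self.P)
            self.alpha = self.alpha + self.Kinv @ q * err

# Single-pass, subset-then-combine sketch: two sub-models trained on disjoint
# halves of a 1-D regression task, then merged by replaying each sub-model's
# compact dictionary through a third KRLS instance.
rng = np.random.default_rng(0)
xs = rng.uniform(-3.0, 3.0, 400)
ys = np.sin(xs)

sub_a, sub_b, merged = KRLS(), KRLS(), KRLS()
for xi, yi in zip(xs[:200], ys[:200]):
    sub_a.update(np.array([xi]), yi)
for xi, yi in zip(xs[200:], ys[200:]):
    sub_b.update(np.array([xi]), yi)
for sub in (sub_a, sub_b):
    for d in sub.dict_:
        merged.update(d, sub.predict(d))
```

Because only each sub-model's dictionary (not its raw training data) is replayed, the combine step touches far fewer points than the original subsets, which is what keeps the merged model compact.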
DOI: 10.1109/ICASSP.2015.7179023