Distributed kernel learning using Kernel Recursive Least Squares


Detailed Bibliography
Published in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5500–5504
Main Authors: Fraser, Nicholas J., Moss, Duncan J. M., Epain, Nicolas, Leong, Philip H. W.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2015
Subjects:
ISSN: 1520-6149
Online Access: Get full text
Description
Summary: Constructing accurate models that represent the underlying structure of Big Data is a costly process that usually constitutes a compromise between computation time and model accuracy. Methods addressing these issues often employ parallelisation to handle processing. Many of these methods target the Support Vector Machine (SVM) and provide a significant speed up over batch approaches. However, the convergence of these methods often relies on multiple passes through the data. In this paper, we present a parallelised algorithm that constructs a model equivalent to a serial approach, whilst requiring only a single pass of the data. We first employ the Kernel Recursive Least Squares (KRLS) algorithm to construct several models from subsets of the overall data. We then show that these models can be combined using KRLS to create a single compact model. Our parallelised KRLS methodology significantly improves execution time and demonstrates comparable accuracy when compared to the parallel and serial SVM approaches.
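The scheme the summary describes — train KRLS sub-models on data subsets, then merge them by running KRLS again over the sub-models' stored points — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an RBF kernel and an Engel-style KRLS that keeps every sample in the dictionary (no approximate-linear-dependence sparsification), and the `combine` helper is a hypothetical name for the merge step.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two scalars or vectors."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

class KRLS:
    """Minimal KRLS: every sample joins the dictionary, and the inverse of
    the regularised kernel matrix (K + reg*I) is grown one sample at a time
    via a Schur-complement block update."""
    def __init__(self, gamma=1.0, reg=1e-6):
        self.gamma, self.reg = gamma, reg
        self.dict, self.targets = [], []
        self.Kinv = None  # inverse of (K + reg * I)

    def _k(self, x):
        """Kernel vector between x and the current dictionary."""
        return np.array([rbf(x, d, self.gamma) for d in self.dict])

    def update(self, x, y):
        if not self.dict:
            self.dict, self.targets = [x], [y]
            self.Kinv = np.array([[1.0 / (rbf(x, x, self.gamma) + self.reg)]])
            return
        k = self._k(x)
        a = self.Kinv @ k
        delta = rbf(x, x, self.gamma) + self.reg - k @ a  # Schur complement
        n = len(self.dict)
        Kinv = np.empty((n + 1, n + 1))
        Kinv[:n, :n] = self.Kinv + np.outer(a, a) / delta
        Kinv[:n, n] = Kinv[n, :n] = -a / delta
        Kinv[n, n] = 1.0 / delta
        self.Kinv = Kinv
        self.dict.append(x)
        self.targets.append(y)

    def predict(self, x):
        alpha = self.Kinv @ np.array(self.targets)  # dual weights
        return float(self._k(x) @ alpha)

def combine(models, gamma=1.0, reg=1e-6):
    """Merge sub-models: replay each model's dictionary points, labelled
    with that model's own predictions, through a fresh KRLS."""
    merged = KRLS(gamma=gamma, reg=reg)
    for m in models:
        for x in m.dict:
            merged.update(x, m.predict(x))
    return merged
```

Because each sub-model nearly interpolates its own dictionary points, replaying those points with their predicted labels lets the merged model reproduce what a single serial KRLS pass over all the data would learn, which is the equivalence the abstract claims.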
DOI: 10.1109/ICASSP.2015.7179023