Distributed kernel learning using Kernel Recursive Least Squares


Bibliographic Details
Published in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5500-5504
Main Authors: Fraser, Nicholas J., Moss, Duncan J. M., Epain, Nicolas, Leong, Philip H. W.
Format: Conference Proceeding
Language: English
Published: IEEE 01.04.2015
ISSN: 1520-6149
Description
Summary: Constructing accurate models that represent the underlying structure of Big Data is a costly process that usually constitutes a compromise between computation time and model accuracy. Methods addressing these issues often employ parallelisation to handle processing. Many of these methods target the Support Vector Machine (SVM) and provide a significant speed-up over batch approaches. However, the convergence of these methods often relies on multiple passes through the data. In this paper, we present a parallelised algorithm that constructs a model equivalent to a serial approach, whilst requiring only a single pass over the data. We first employ the Kernel Recursive Least Squares (KRLS) algorithm to construct several models from subsets of the overall data. We then show that these models can be combined using KRLS to create a single compact model. Our parallelised KRLS methodology significantly improves execution time and achieves accuracy comparable to the parallel and serial SVM approaches.
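The split-train-combine scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a naive online kernel least-squares learner (which re-solves the full regularised kernel system on every sample) stands in for true KRLS, which instead updates its solution recursively and sparsifies its dictionary with an approximate-linear-dependence test. The `rbf_kernel` and `combine` helpers, the kernel width `gamma`, and the regulariser `lam` are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row-sets X (n,d) and Y (m,d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelLS:
    """Naive online kernel least squares (a stand-in for KRLS).

    Every sample joins the dictionary and the regularised kernel system
    is re-solved from scratch; real KRLS updates alpha recursively and
    keeps the dictionary sparse via an ALD novelty test."""
    def __init__(self, gamma=0.5, lam=1e-3):
        self.gamma, self.lam = gamma, lam
        self.X, self.y, self.alpha = None, None, None

    def update(self, x, y):
        x = np.atleast_1d(x).astype(float)
        self.X = x[None, :] if self.X is None else np.vstack([self.X, x])
        self.y = np.array([y]) if self.y is None else np.append(self.y, y)
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.y)), self.y)

    def predict(self, Xq):
        return rbf_kernel(np.atleast_2d(Xq), self.X, self.gamma) @ self.alpha

def combine(models, gamma=0.5, lam=1e-3):
    """Merge sub-models into one compact model by training a fresh
    learner on the union of their dictionaries, using each sub-model's
    own predictions at its dictionary points as the targets."""
    merged = OnlineKernelLS(gamma, lam)
    for m in models:
        for x, t in zip(m.X, m.predict(m.X)):
            merged.update(x, float(t))
    return merged

# Train two sub-models on disjoint halves of the data, then combine.
xs = np.linspace(0, 2 * np.pi, 40)
ys = np.sin(xs)
m1, m2 = OnlineKernelLS(), OnlineKernelLS()
for x, y in zip(xs[:20], ys[:20]):
    m1.update(x, y)
for x, y in zip(xs[20:], ys[20:]):
    m2.update(x, y)
merged = combine([m1, m2])
```

In the paper's setting each subset is processed by an independent KRLS worker in parallel, and the combining pass is itself a KRLS run over the workers' (sparse) dictionaries, so the merged model stays compact.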
DOI: 10.1109/ICASSP.2015.7179023