Linear Approximation of Deep Neural Networks for Efficient Inference on Video Data

Bibliographic Details
Published in: 2019 27th European Signal Processing Conference (EUSIPCO), pp. 1-5
Main Authors: Rueckauer, Bodo; Liu, Shih-Chii
Format: Conference Proceeding
Language: English
Published: EURASIP 01.09.2019
ISSN: 2076-1465
Description
Summary: Sequential data such as video are characterized by spatio-temporal correlations. As yet, few deep learning algorithms exploit these correlations to decrease the often massive cost of inference. This work leverages correlations in video data to linearize part of a deep neural network and thus reduce its size and computational cost. Drawing upon the simplicity of the typically used rectifier activation function, we replace the ReLU function with dynamically updating masks. The resulting layer stack is a simple chain of matrix multiplications and bias additions that can be contracted into a single weight matrix and bias vector. Inference then reduces to an affine transformation of the input sequence with these contracted parameters. We show that the method is akin to approximating the neural network with a first-order Taylor expansion around a dynamically updating reference point. The proposed algorithm is evaluated on a denoising convolutional autoencoder.
DOI: 10.23919/EUSIPCO.2019.8902997
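
The abstract describes collapsing a ReLU network into a single affine map once the nonlinearities are frozen as masks. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the fully connected setting, the layer shapes, and the contract_network helper are illustrative assumptions (the paper evaluates a convolutional autoencoder). Each ReLU is replaced by the 0/1 mask it produces at a reference input, and the resulting chain of masked affine layers is contracted into one effective weight matrix and bias vector.

# Minimal sketch (illustrative, not the authors' code) of linearizing a ReLU
# network around a reference input and contracting it into one affine map.
import numpy as np

def contract_network(weights, biases, reference_x):
    """Contract masked affine layers into (W_eff, b_eff) with y ~= W_eff @ x + b_eff.

    weights: list of (out_dim, in_dim) arrays; biases: list of (out_dim,) arrays.
    The ReLU masks come from a forward pass at reference_x, which plays the role
    of the dynamically updating reference point of the first-order expansion.
    """
    W_eff = np.eye(weights[0].shape[1])
    b_eff = np.zeros(weights[0].shape[1])
    h = reference_x
    for W, b in zip(weights, biases):
        pre = W @ h + b                       # pre-activation at the reference point
        mask = (pre > 0).astype(pre.dtype)    # ReLU frozen as a 0/1 mask
        DW = mask[:, None] * W                # diag(mask) @ W, the masked layer
        b_eff = DW @ b_eff + mask * b         # compose biases (uses the old b_eff)
        W_eff = DW @ W_eff                    # compose weights
        h = np.maximum(pre, 0.0)              # continue the reference forward pass
    return W_eff, b_eff

# Sanity check: at the reference input itself the contracted affine map reproduces
# the full ReLU network exactly; nearby inputs are approximated to first order.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 16)), rng.standard_normal((16, 8))]
bs = [rng.standard_normal(8), rng.standard_normal(16)]
x0 = rng.standard_normal(16)
W_eff, b_eff = contract_network(Ws, bs, x0)
y = x0
for W, b in zip(Ws, bs):
    y = np.maximum(W @ y + b, 0.0)
assert np.allclose(W_eff @ x0 + b_eff, y)

In the video setting the abstract targets, one could reuse (W_eff, b_eff) across temporally correlated frames and refresh the masks only when the input drifts away from the reference point; that reuse is where the reduction in inference cost would come from.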