Laplacian networks: bounding indicator function smoothness for neural networks robustness

Bibliographic Details
Published in: APSIPA Transactions on Signal and Information Processing, Vol. 10, No. 1
Main Authors: Lassance, Carlos; Gripon, Vincent; Ortega, Antonio
Format: Journal Article
Language: English
Published: Cambridge, UK: Cambridge University Press, 2021
ISSN: 2048-7703
Description
Summary: For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training using noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets for various types of perturbations. We also show it can be combined with existing methods to increase overall robustness.
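The core quantity the summary describes can be illustrated with a minimal NumPy sketch: build a similarity graph over the representations at one layer, form its Laplacian L, and measure the smoothness f^T L f of each class's binary indicator vector; the regularizer then penalizes changes in that smoothness between consecutive layers. The Gaussian-kernel graph construction and all function names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def laplacian_smoothness(features, labels, sigma=1.0):
    """Smoothness f^T L f of class indicator vectors on a similarity graph.

    Sketch only: assumes a dense Gaussian-kernel graph; the paper's actual
    graph construction may differ (e.g. k-nearest-neighbor graphs).
    """
    # Pairwise squared Euclidean distances between example representations
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))   # similarity (adjacency) matrix
    np.fill_diagonal(W, 0.0)             # no self-loops
    L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian
    total = 0.0
    for c in np.unique(labels):
        f = (labels == c).astype(float)  # binary indicator vector for class c
        total += float(f @ L @ f)        # large when similar points straddle the class
    return total

def laplacian_regularizer(per_layer_features, labels):
    """Penalize large changes in smoothness across consecutive layers."""
    s = [laplacian_smoothness(F, labels) for F in per_layer_features]
    return sum(abs(b - a) for a, b in zip(s, s[1:]))
```

In training, a term like this would be added to the classification loss so that class boundaries, as seen through each layer's similarity graph, vary smoothly from one layer to the next.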
DOI:10.1017/ATSIP.2021.2