Adversarial classification via distributional robustness with Wasserstein ambiguity


Bibliographic Details
Published in: Mathematical Programming, Vol. 198, No. 2, pp. 1411-1447
Main Authors: Ho-Nguyen, Nam; Wright, Stephen J.
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.04.2023
ISSN: 0025-5610, 1436-4646
Description
Summary: We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the distributionally robust model for linear classification, and show it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
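As a rough illustration of the kind of objective the summary refers to (not the paper's exact formulation), the ramp loss clips the hinge loss at 1, and a regularized ramp-loss objective for a linear classifier can be minimized with plain subgradient descent. The synthetic data, step size, and regularization weight below are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: labels y in {-1, +1}, well-separated Gaussian clusters.
n = 100
X = np.vstack([rng.normal(2.0, 0.5, (n, 2)), rng.normal(-2.0, 0.5, (n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def ramp_loss(z):
    # Ramp loss: min(1, max(0, 1 - z)) of the signed margin z -- a clipped hinge.
    return np.clip(1.0 - z, 0.0, 1.0)

lam = 0.01   # regularization weight (arbitrary for this sketch)
lr = 0.1     # step size
w = np.zeros(2)

for _ in range(200):
    z = y * (X @ w)                      # signed margins y_i * <w, x_i>
    active = (z >= 0.0) & (z < 1.0)      # region where the ramp has slope -1
    # Subgradient of mean ramp loss plus (lam/2)*||w||^2
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(y) + lam * w
    w -= lr * grad

accuracy = float(np.mean(np.sign(X @ w) == y))
print("train accuracy:", accuracy)
```

The ramp loss is nonconvex (it flattens for large negative margins), yet on easy data like this, subgradient descent still reaches a separating classifier, consistent with the behavior the summary reports for standard descent methods.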
DOI: 10.1007/s10107-022-01796-6