Improving Out-of-Distribution Detection with Disentangled Foreground and Background Features

Bibliographic Details
Title: Improving Out-of-Distribution Detection with Disentangled Foreground and Background Features
Authors: Choubo Ding, Guansong Pang
Source: Proceedings of the 32nd ACM International Conference on Multimedia. :8923-8931
Publication Status: Published
Publisher Information: ACM, 2024.
Publication Year: 2024
Subject Terms: Out-of-Distribution detection, Artificial Intelligence and Robotics, Computer Vision and Pattern Recognition (cs.CV), Disentangled representations, Anomaly detection, Machine learning, Graphics and Human Computer Interfaces, Computer vision, Image representation
Description: Detecting out-of-distribution (OOD) inputs is a principal task for ensuring the safety of deploying deep-neural-network classifiers in open-set scenarios. OOD samples can be drawn from arbitrary distributions and exhibit deviations from in-distribution (ID) data in various dimensions, such as foreground features (e.g., objects in CIFAR100 images vs. those in CIFAR10 images) and background features (e.g., texture images vs. objects in CIFAR10). Existing methods can conflate foreground and background features during training, failing to utilize the background features for OOD detection. This paper considers the importance of feature disentanglement in out-of-distribution detection and proposes the simultaneous exploitation of both foreground and background features to support OOD detection. To this end, we propose a novel framework that first disentangles foreground and background features from ID training samples via a dense prediction approach, and then learns a new classifier that can evaluate the OOD scores of test images from both foreground and background features. It is a generic framework that allows for a seamless combination with various existing OOD detection methods. Extensive experiments show that our approach 1) can substantially enhance the performance of four different state-of-the-art (SotA) OOD detection methods on multiple widely-used OOD datasets with diverse background features, and 2) achieves new SotA performance on these benchmarks.
Accepted by ACM MM 2024, 9 pages
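
The abstract sketches a two-stage design: dense prediction disentangles foreground from background features, and a classifier then scores test images on both. As a rough illustration of the scoring idea only (the energy-style score, the `alpha` blend, and the two-head setup are assumptions for this sketch, not the paper's actual method), a minimal PyTorch example could combine per-branch OOD scores like this:

```python
import torch

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    # Negative free energy over the logits; higher values = more ID-like.
    return torch.logsumexp(logits, dim=-1)

def combined_ood_score(fg_logits: torch.Tensor,
                       bg_logits: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    # Blend foreground-branch and background-branch scores.
    # NOTE: `alpha` and the energy-style scoring are illustrative
    # assumptions, not the design described in the paper.
    return alpha * energy_score(fg_logits) + (1.0 - alpha) * energy_score(bg_logits)

# Toy usage: a batch of 4 images, 10 foreground classes, 2-way background head.
fg = torch.randn(4, 10)  # logits from a foreground classifier (hypothetical)
bg = torch.randn(4, 2)   # logits from a background head (hypothetical)
print(combined_ood_score(fg, bg))  # lower scores flag likely OOD inputs
```

Under this convention a single threshold on the blended score decides ID vs. OOD, so either branch can flag an input whose foreground or background looks atypical.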
Document Type: Article
File Description: application/pdf
DOI: 10.1145/3664647.3681614
DOI: 10.48550/arxiv.2303.08727
Access URL: http://arxiv.org/abs/2303.08727
Rights: CC BY; arXiv Non-Exclusive Distribution; CC BY-NC-ND
Accession Number: edsair.doi.dedup.....78d76fa4e66e8cb6c2f394dbbe465743
Database: OpenAIRE