Looking Beyond Single Images for Weakly Supervised Semantic Segmentation Learning



Detailed bibliography
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 3, pp. 1635-1649
Main authors: Wang, Wenguan; Sun, Guolei; Van Gool, Luc
Format: Journal Article
Language: English
Publication details: United States, IEEE, 01.03.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0162-8828, 1939-3539, 2160-9292
Description
Summary: This article studies the problem of learning weakly supervised semantic segmentation (WSSS) from image-level supervision only. Current popular solutions leverage object localization maps from classifiers as supervision for semantic segmentation learning, and struggle to make the localization maps capture more complete object content. Rather than previous efforts that primarily focus on intra-image information, we address the value of cross-image semantic relations for comprehensive object pattern mining. To achieve this, two neural co-attentions are incorporated into the classifier to complementarily capture cross-image semantic similarities and differences. In particular, given a pair of training images, one co-attention enforces the classifier to recognize the common semantics from co-attentive objects, while the other one, called contrastive co-attention, drives the classifier to identify the unique semantics from the remaining, unshared objects. This helps the classifier discover more object patterns and better ground semantics in image regions. In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference, hence eventually benefiting semantic segmentation learning. More importantly, our algorithm provides a unified framework that handles different WSSS settings well, i.e., learning WSSS with 1) precise image-level supervision only, 2) extra simple single-label data, and 3) extra noisy web data. Without bells and whistles, it sets new state-of-the-art results on all these settings. Moreover, our approach ranked 1st place in the Weakly-Supervised Semantic Segmentation Track of the CVPR 2020 Learning from Imperfect Data Challenge. The extensive experimental results demonstrate the efficacy and high utility of our method.
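The cross-image co-attention the abstract describes can be illustrated with a minimal sketch. Note this is a simplified, hypothetical dot-product variant written for clarity, not the paper's actual implementation (which uses a learned affinity weight matrix and class-aware masking for the contrastive branch): each spatial position of one image attends over all positions of a paired image, so shared objects reinforce each other's features.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(feat_a, feat_b):
    """Simplified cross-image co-attention (illustrative only).

    feat_a: (C, Na) flattened feature map of image A
    feat_b: (C, Nb) flattened feature map of image B
    Returns co-attentive features in which each position of one image
    is re-expressed by the semantically most similar positions of the
    other image, emphasizing the objects the two images share.
    """
    # Pairwise affinity between all spatial positions of the two images.
    affinity = feat_a.T @ feat_b                 # (Na, Nb)
    # For every A position, a distribution over B positions (and vice versa).
    attn_over_b = softmax(affinity, axis=1)      # rows sum to 1
    attn_over_a = softmax(affinity, axis=0)      # columns sum to 1
    co_a = feat_b @ attn_over_b.T                # (C, Na)
    co_b = feat_a @ attn_over_a                  # (C, Nb)
    return co_a, co_b

# Toy example: 4-channel features on grids of 6 and 4 positions.
rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 6))
fb = rng.standard_normal((4, 4))
ca, cb = co_attention(fa, fb)
print(ca.shape, cb.shape)   # (4, 6) (4, 4)
```

In the full method, the co-attentive features feed the classifier so it must recognize the common classes from them, while the contrastive branch works on what remains after the shared content is suppressed.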
DOI: 10.1109/TPAMI.2022.3168530