Semantic and Instance-Aware Pixel-Adaptive Convolution for Panoptic Segmentation


Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 16-20
Main authors: Song, Sumin; Sagong, Min-Cheol; Jung, Seung-Won; Ko, Sung-Jea
Medium: Conference paper
Language: English
Publication details: IEEE, 08.10.2023
Description
Summary: Although the weight-sharing property of convolution is one of the major reasons for the success of convolutional neural networks, this content-agnostic operation is insufficient for several tasks that require content-adaptive processing, including panoptic segmentation. Inspired by several recent works on content-adaptive convolutions, we introduce GuidedPAKA, the first content-adaptive convolution method specialized for panoptic segmentation. Specifically, GuidedPAKA learns a pixel-adaptive kernel attention consisting of channel and spatial kernel attentions. Instead of the commonly used self-attention operation, we guide the channel and spatial kernel attentions with their respective supervision signals, i.e., semantic segmentation maps and local instance affinities. Consequently, these kernel attentions extract features helpful for panoptic segmentation. Experimental results show that the proposed GuidedPAKA improves panoptic segmentation performance when integrated into the baseline model.
DOI:10.1109/ICIP49359.2023.10222515
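The summary above describes channel and spatial kernel attentions that re-weight a shared convolution kernel per pixel, with the two attentions supervised by semantic maps and local instance affinities. The following PyTorch sketch is only a rough illustration of that general idea, not the authors' implementation: the module name, the guidance heads, the sigmoid gating, and all hyper-parameters are assumptions.

# Minimal sketch of a pixel-adaptive kernel-attention convolution (assumed design,
# not the published GuidedPAKA code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdaptiveKernelAttentionConv(nn.Module):
    """k x k convolution whose kernel is modulated at every pixel by a
    channel attention and a spatial (kernel-position) attention."""

    def __init__(self, in_ch, out_ch, k=3, num_classes=19):
        super().__init__()
        self.k, self.out_ch = k, out_ch
        # Shared, content-agnostic convolution weight.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        # Channel kernel attention: one scale per input channel at each pixel.
        self.channel_att = nn.Conv2d(in_ch, in_ch, 1)
        # Spatial kernel attention: one scale per kernel position at each pixel.
        self.spatial_att = nn.Conv2d(in_ch, k * k, 1)
        # Hypothetical auxiliary heads providing the guidance signals
        # (semantic segmentation logits and local instance affinities).
        self.sem_head = nn.Conv2d(in_ch, num_classes, 1)
        self.aff_head = nn.Conv2d(in_ch, k * k, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.k
        ch_att = torch.sigmoid(self.channel_att(x))   # (b, c, h, w)
        sp_att = torch.sigmoid(self.spatial_att(x))   # (b, k*k, h, w)
        # Gather k x k neighborhoods for every pixel: (b, c, k*k, h, w).
        patches = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h, w)
        # Modulate neighborhoods by the two pixel-adaptive attentions.
        patches = patches * ch_att.unsqueeze(2) * sp_att.unsqueeze(1)
        # Apply the shared weight to the modulated neighborhoods.
        out = torch.einsum('bckhw,ock->bohw',
                           patches, self.weight.view(self.out_ch, c, k * k))
        # Auxiliary predictions that would receive the guidance losses in training.
        return out, self.sem_head(x), self.aff_head(x)

if __name__ == "__main__":
    layer = PixelAdaptiveKernelAttentionConv(in_ch=16, out_ch=32)
    y, sem, aff = layer(torch.randn(2, 16, 64, 64))
    print(y.shape, sem.shape, aff.shape)  # (2, 32, 64, 64) (2, 19, 64, 64) (2, 9, 64, 64)

Under these assumptions, the auxiliary semantic and affinity outputs exist only to supervise the attention branches during training; at inference the layer behaves like a standard convolution whose kernel is rescaled per pixel.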