Semantic and Instance-Aware Pixel-Adaptive Convolution for Panoptic Segmentation


Detailed Bibliography
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 16–20
Main Authors: Song, Sumin; Sagong, Min-Cheol; Jung, Seung-Won; Ko, Sung-Jea
Format: Conference paper
Language: English
Published: IEEE, 08.10.2023
Description
Abstract: Although the weight-sharing property of convolution is one of the major reasons for the success of convolutional neural networks, this content-agnostic operation is insufficient for several tasks requiring content-adaptive processing, including panoptic segmentation. Inspired by several recent works on content-adaptive convolutions, we introduce GuidedPAKA, the first content-adaptive convolution method specialized for panoptic segmentation. Specifically, GuidedPAKA learns a pixel-adaptive kernel attention consisting of channel and spatial kernel attentions. Instead of the commonly used self-attention operation, we guide the channel and spatial kernel attentions with their respective supervision signals, i.e., semantic segmentation maps and local instance affinities. Consequently, these kernel attentions extract features helpful for panoptic segmentation. Experimental results show that the proposed GuidedPAKA improves the performance of panoptic segmentation when integrated into the baseline model.
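To illustrate the core idea behind a pixel-adaptive kernel attention, the following is a minimal NumPy sketch, not the paper's implementation: a shared convolution kernel is modulated at every output pixel by a per-pixel channel attention and a per-pixel spatial kernel attention. All names and tensor shapes here are illustrative assumptions; in GuidedPAKA these attentions would additionally be guided by semantic segmentation maps and local instance affinities, which is omitted below.

```python
import numpy as np

def pixel_adaptive_conv(x, weight, chan_att, spat_att):
    """Hypothetical sketch of pixel-adaptive kernel attention.

    The shared kernel `weight` is rescaled per output pixel by a
    channel attention (one scale per input channel) and a spatial
    kernel attention (one scale per kernel tap), then applied as a
    plain dot product with the local patch.

    x        : (C, H, W) input feature map
    weight   : (C, k, k) shared kernel (single output channel)
    chan_att : (C, H, W) per-pixel channel attention
    spat_att : (k*k, H, W) per-pixel spatial kernel attention
    returns  : (H, W) output, computed with zero padding
    """
    C, H, W = x.shape
    k = weight.shape[1]
    r = k // 2
    xp = np.pad(x, ((0, 0), (r, r), (r, r)))  # zero-pad spatial dims
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + k, j:j + k]            # (C, k, k) window
            # modulate the shared kernel for THIS pixel
            kern = (weight
                    * chan_att[:, i, j][:, None, None]  # channel attention
                    * spat_att[:, i, j].reshape(k, k))  # spatial attention
            out[i, j] = np.sum(patch * kern)
    return out
```

With both attentions set to all-ones, this reduces to an ordinary (content-agnostic) convolution; spatially varying attentions are what make the operation content-adaptive.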
DOI: 10.1109/ICIP49359.2023.10222515