Semantic and Instance-Aware Pixel-Adaptive Convolution for Panoptic Segmentation

Bibliographic Details
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 16-20
Main Authors: Song, Sumin, Sagong, Min-Cheol, Jung, Seung-Won, Ko, Sung-Jea
Format: Conference Proceeding
Language: English
Published: IEEE 08.10.2023
Description
Summary: Although the weight-sharing property of convolution is one of the major reasons for the success of convolutional neural networks, this content-agnostic operation is insufficient for several tasks that require content-adaptive processing, including panoptic segmentation. Inspired by several recent works on content-adaptive convolutions, we introduce GuidedPAKA, the first content-adaptive convolution method specialized for panoptic segmentation. Specifically, GuidedPAKA learns a pixel-adaptive kernel attention consisting of channel and spatial kernel attentions. Instead of the commonly used self-attention operation, we guide the channel and spatial kernel attentions with their respective supervision signals, i.e., semantic segmentation maps and local instance affinities. Consequently, these kernel attentions extract features that are helpful for panoptic segmentation. Experimental results show that the proposed GuidedPAKA improves the performance of panoptic segmentation when integrated into the baseline model.
DOI:10.1109/ICIP49359.2023.10222515
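The core idea in the abstract, a shared convolution kernel modulated per pixel by a channel attention and a spatial attention, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `guided_paka_conv`, the tensor layouts, and the use of precomputed attention maps (rather than attentions learned under semantic and affinity supervision) are all assumptions made for illustration.

```python
import numpy as np

def guided_paka_conv(x, weight, chan_att, spat_att):
    """Sketch of pixel-adaptive kernel attention convolution (assumed layout).

    At every spatial location the shared kernel `weight` is gated by a
    channel kernel attention (one gate per input channel) and a spatial
    kernel attention (one gate per kernel offset), then applied locally.

    x:        (C_in, H, W)        input feature map
    weight:   (C_out, C_in, k, k) shared convolution weights
    chan_att: (C_in, H, W)        per-pixel channel kernel attention
    spat_att: (k, k, H, W)        per-pixel spatial kernel attention
    returns:  (C_out, H, W)
    """
    c_out, c_in, k, _ = weight.shape
    _, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # zero padding
    out = np.zeros((c_out, h, w))
    for i in range(h):
        for j in range(w):
            patch = xp[:, i:i + k, j:j + k]  # (C_in, k, k) neighborhood
            # Pixel-specific kernel: shared weight gated by both attentions.
            kern = (weight
                    * chan_att[None, :, i, j, None, None]   # (1, C_in, 1, 1)
                    * spat_att[None, None, :, :, i, j])     # (1, 1, k, k)
            out[:, i, j] = (kern * patch[None]).sum(axis=(1, 2, 3))
    return out

# Usage: with all-ones attentions this reduces to an ordinary convolution,
# so an identity kernel on channel 0 should reproduce that channel.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5, 5))
weight = np.zeros((1, 2, 3, 3))
weight[0, 0, 1, 1] = 1.0  # identity kernel selecting channel 0
y = guided_paka_conv(x, weight, np.ones((2, 5, 5)), np.ones((3, 3, 5, 5)))
```

With uniform attentions the output equals plain convolution; in the paper the channel and spatial attention maps would instead be predicted by subnetworks supervised by semantic segmentation maps and local instance affinities, respectively.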