YOLACT++ Better Real-Time Instance Segmentation


Bibliographic Details
Published in:IEEE transactions on pattern analysis and machine intelligence Vol. 44; no. 2; pp. 1108 - 1121
Main Authors: Bolya, Daniel, Zhou, Chong, Xiao, Fanyi, Lee, Yong Jae
Format: Journal Article
Language:English
Published: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), United States, 01.02.2022
ISSN:0162-8828, 1939-3539, 2160-9292
Description
Summary:We present a simple, fully-convolutional model for real-time (>30 fps) instance segmentation that achieves competitive results on MS COCO evaluated on a single Titan Xp, which is significantly faster than any previous state-of-the-art approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation-variant manner, despite being fully-convolutional. We also propose Fast NMS, a drop-in replacement for standard NMS that is 12 ms faster with only a marginal performance penalty. Finally, by incorporating deformable convolutions into the backbone network, optimizing the prediction head with better anchor scales and aspect ratios, and adding a novel fast mask re-scoring branch, our YOLACT++ model can achieve 34.1 mAP on MS COCO at 33.5 fps, which is fairly close to the state-of-the-art approaches while still running at real-time.
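The core idea of the abstract — producing each instance mask as a linear combination of shared prototype masks, weighted by that instance's coefficient vector — can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation (YOLACT additionally applies activation and cropping steps not shown here); the function name and shapes are assumptions.

```python
import numpy as np

def assemble_masks(prototypes, coefficients):
    """Illustrative sketch: combine k prototype masks (H x W x k)
    with n per-instance coefficient vectors (n x k) and squash
    the result with a sigmoid to get n soft masks (H x W x n)."""
    # (H, W, k) @ (k, n) -> (H, W, n): one mask channel per instance
    lin = prototypes @ coefficients.T
    return 1.0 / (1.0 + np.exp(-lin))

# Toy example: 2 prototypes on a 4x4 grid, 3 instances
P = np.ones((4, 4, 2))
C = np.array([[1.0, -1.0], [0.5, 0.5], [2.0, 0.0]])
M = assemble_masks(P, C)
```

Because the combination is a single matrix product over full-resolution prototypes, no per-instance repooling is needed, which is the property the abstract credits for mask quality and temporal stability.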
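Fast NMS, mentioned in the abstract, replaces the sequential suppression loop of standard NMS with one batched IoU computation: every detection is removed if it overlaps too much with any higher-scoring detection, even one that is itself removed. The sketch below is a rough NumPy rendering of that idea under assumed box and score layouts, not the paper's GPU implementation.

```python
import numpy as np

def pairwise_iou(a, b):
    """IoU matrix between two sets of (x1, y1, x2, y2) boxes."""
    tl = np.maximum(a[:, None, :2], b[None, :, :2])   # intersection top-left
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])   # intersection bottom-right
    wh = np.clip(br - tl, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def fast_nms(boxes, scores, iou_threshold=0.5):
    """Illustrative Fast NMS sketch: suppress a box if its IoU with ANY
    higher-scoring box exceeds the threshold (no sequential loop)."""
    order = np.argsort(-scores)           # indices by descending score
    if len(order) == 0:
        return order
    iou = pairwise_iou(boxes[order], boxes[order])
    # Keep only the upper triangle so each box is compared solely
    # against boxes that score higher than it.
    iou = np.triu(iou, k=1)
    keep = iou.max(axis=0) <= iou_threshold
    return order[keep]
```

Unlike standard NMS, already-suppressed boxes can still suppress others here, which is why the result is slightly more aggressive; the abstract's "marginal performance penalty" refers to exactly this trade-off.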
DOI:10.1109/TPAMI.2020.3014297