FAMINet: Learning Real-Time Semisupervised Video Object Segmentation With Steepest Optimized Optical Flow


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 71, pp. 1–16
Main Authors: Liu, Ziyang; Liu, Jingmeng; Chen, Weihai; Wu, Xingming; Li, Zhengguo
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
ISSN: 0018-9456, 1557-9662
Description
Summary: Semisupervised video object segmentation (VOS) aims to segment a few moving objects in a video sequence, where these objects are specified by an annotation of the first frame. Optical flow has been incorporated into many existing semisupervised VOS methods to improve segmentation accuracy. However, optical flow-based semisupervised VOS methods cannot run in real time due to the high complexity of optical flow estimation. This study proposes FAMINet, which consists of a feature extraction network (F), an appearance network (A), a motion network (M), and an integration network (I), to address this problem. The appearance network outputs an initial segmentation result based on the static appearance of objects. The motion network estimates optical flow using very few parameters, which are optimized rapidly by an online memorizing algorithm named relaxed steepest descent. The integration network refines the initial segmentation result using the optical flow. Extensive experiments demonstrate that FAMINet outperforms other state-of-the-art semisupervised VOS methods on the DAVIS and YouTube-VOS benchmarks and achieves a good trade-off between accuracy and efficiency. Our code is available at https://github.com/liuziyang123/FAMINet.
DOI: 10.1109/TIM.2021.3133003