FAMINet: Learning Real-Time Semisupervised Video Object Segmentation With Steepest Optimized Optical Flow
Saved in:
| Published in: | IEEE Transactions on Instrumentation and Measurement, Volume 71; pp. 1-16 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Publication details: | New York: IEEE, 2022. The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subject: | |
| ISSN: | 0018-9456, 1557-9662 |
| Online access: | Get full text |
| Summary: | Semisupervised video object segmentation (VOS) aims to segment a few moving objects in a video sequence, where these objects are specified by the annotation of the first frame. Optical flow has been used in many existing semisupervised VOS methods to improve segmentation accuracy. However, optical flow-based semisupervised VOS methods cannot run in real time due to the high complexity of optical flow estimation. This study proposes FAMINet, which consists of a feature extraction network (F), an appearance network (A), a motion network (M), and an integration network (I), to address this problem. The appearance network outputs an initial segmentation result based on the static appearance of objects. The motion network estimates optical flow with very few parameters, which are optimized rapidly by an online memorizing algorithm named relaxed steepest descent. The integration network refines the initial segmentation result using the optical flow. Extensive experiments demonstrate that FAMINet outperforms other state-of-the-art semisupervised VOS methods on the DAVIS and YouTube-VOS benchmarks and achieves a good trade-off between accuracy and efficiency. Our code is available at https://github.com/liuziyang123/FAMINet. |
|---|---|
| Bibliography: | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 |
| ISSN: | 0018-9456, 1557-9662 |
| DOI: | 10.1109/TIM.2021.3133003 |