Online Rain/Snow Removal from Surveillance Videos

Detailed Bibliography
Published in: IEEE Transactions on Image Processing, Volume 30; p. 1
Main Authors: Li, Minghan; Cao, Xiangyong; Zhao, Qian; Zhang, Lei; Meng, Deyu
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2021
ISSN: 1057-7149, 1941-0042
Description
Summary: Video rain/snow removal from surveillance videos is an important task in the computer vision community, since rain/snow present in videos can severely degrade the performance of many surveillance systems. Various methods have been investigated extensively, but most only consider consistent rain/snow under stable background scenes. Rain/snow captured by practical surveillance cameras, however, is always highly dynamic in time, and such videos also include occasionally transformed background scenes and background motions caused by waving leaves or water surfaces. To address this issue, this paper proposes a novel rain/snow removal approach that fully considers the dynamic statistics of both rain/snow and the background scenes taken from a video sequence. Specifically, the rain/snow is encoded as an online multi-scale convolutional sparse coding (OMS-CSC) model, which not only finely captures the sparse scattering and multi-scale shapes of real rain/snow, but also distinguishes the components of background motion from the rain/snow layer. The model parameters, updated in real time, encode the temporally dynamic configuration of the rain/snow. Furthermore, a transformation operator imposed on the background scenes is embedded into the proposed model, which finely conveys the background transformations, such as rotations, scalings, and distortions, that inevitably exist in a real video sequence. The approach so constructed can naturally adapt better to dynamic rain/snow as well as background changes, and is also suited to streaming video thanks to its online learning mode. The proposed model is formulated in a concise maximum a posteriori (MAP) framework and is readily solved by the alternating direction method of multipliers (ADMM) algorithm. Compared with state-of-the-art online and offline video rain/snow removal methods, the proposed method achieves the best performance on synthetic and real video datasets, both visually and quantitatively. Moreover, our method can be implemented with relatively high efficiency, showing its potential for real-time video rain/snow removal. The code page is at: https://github.com/MinghanLi/OTMSCSC_matlab_2020.
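
As a rough, non-authoritative sketch of the kind of objective the summary describes (not the paper's actual formulation; every symbol below is an illustrative placeholder), a MAP-style decomposition of an observed frame into a transformed background, a multi-scale convolutional sparse rain/snow layer, and a background-motion term could take the form

    \min_{\mathcal{B},\, \tau,\, \{m_k\},\, \mathcal{E}} \;
        \tfrac{1}{2} \big\| \mathcal{Y} - \tau(\mathcal{B}) - \textstyle\sum_{k=1}^{K} d_k \ast m_k - \mathcal{E} \big\|_F^2
        \; + \; \lambda \textstyle\sum_{k=1}^{K} \| m_k \|_1
        \; + \; \mu \, R(\mathcal{B})
        \; + \; \gamma \, \| \mathcal{E} \|_1

where \mathcal{Y} is the observed frame, \tau(\mathcal{B}) a geometrically transformed (rotated, scaled, or distorted) background, d_k and m_k multi-scale filters and their sparse feature maps modeling rain/snow streaks, \mathcal{E} a term absorbing background motions, and R(\cdot) a generic background prior. An objective of this shape can be split into per-variable subproblems and minimized alternately, e.g., with ADMM, with the filters and coefficients updated online as new frames arrive, which is the general strategy the summary attributes to the proposed method.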
DOI: 10.1109/TIP.2021.3050313