Moving object detection using statistical background subtraction in wavelet compressed domain

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 79, No. 9-10, pp. 5919-5940
Main Authors: Sengar, Sandeep Singh, Mukhopadhyay, Susanta
Format: Journal Article
Language:English
Published: New York: Springer US, 01.03.2020
Springer Nature B.V
ISSN: 1380-7501, 1573-7721
Description
Summary: Moving object detection is a fundamental task and an extensively studied research area in modern computer vision applications. Background subtraction is one of the most widely used and efficient techniques for this task; it generates an initial background model using different statistical parameters. Because of the enormous size of video data, the segmentation process requires a considerable amount of memory and time. To address these shortcomings, we propose a statistical background subtraction based motion segmentation method in a wavelet-compressed transform domain. We apply the weighted-mean and weighted-variance based background subtraction operations only on the detail components of the wavelet-transformed frame to reduce the computational complexity. Here, the weight for each pixel location is computed using a pixel-wise median operation between successive frames. To detect the foreground objects, we employ an adaptive threshold whose value is selected based on different statistical parameters. Finally, morphological operations, connected component analysis, and a flood-fill algorithm are applied to detect the foreground objects efficiently and accurately. Our method has been conceived, implemented, and tested on different real video sequences, and the experimental results show that its performance compares favourably with several existing approaches.
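
To give a concrete picture of the kind of pipeline the abstract outlines, the sketch below shows a simplified wavelet-domain background model in Python (NumPy and PyWavelets). It is not the authors' implementation: the class name, the fixed learning rate alpha (standing in for the paper's median-based per-pixel weights) and the k-sigma threshold rule (standing in for the adaptive, statistics-driven threshold) are illustrative assumptions, and the morphological clean-up, connected-component analysis and flood-fill post-processing are omitted.

    # Illustrative sketch only, not the authors' method: a running weighted-mean /
    # weighted-variance background model maintained on the detail sub-bands of a
    # single-level 2-D DWT, with a simple k-sigma rule for foreground pixels.
    import numpy as np
    import pywt

    def detail_bands(frame, wavelet="haar"):
        """Stack the horizontal, vertical and diagonal detail sub-bands of a frame."""
        _, (cH, cV, cD) = pywt.dwt2(frame.astype(np.float64), wavelet)
        return np.stack([cH, cV, cD])

    class WaveletBackgroundModel:
        def __init__(self, first_frame, alpha=0.05, k=2.5):
            self.alpha = alpha              # learning rate (placeholder for median-based weights)
            self.k = k                      # threshold multiplier on the standard deviation
            self.mean = detail_bands(first_frame)
            self.var = np.ones_like(self.mean)

        def apply(self, frame):
            d = detail_bands(frame)
            diff = np.abs(d - self.mean)
            # Foreground where any detail coefficient deviates strongly from the model.
            fg = (diff > self.k * np.sqrt(self.var)).any(axis=0)
            # Update the running weighted mean and variance (background adaptation).
            self.mean = (1 - self.alpha) * self.mean + self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * (d - self.mean) ** 2
            return fg                       # half-resolution foreground mask

    # Usage: feed grayscale frames (2-D arrays) one by one.
    # model = WaveletBackgroundModel(frames[0])
    # masks = [model.apply(f) for f in frames[1:]]

Working on the detail sub-bands keeps the model at a quarter of the original pixel count per level, which is the source of the memory and time savings the abstract claims; the mask would then be upsampled or refined by the post-processing steps described above.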
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
DOI:10.1007/s11042-019-08506-z