Variational Bayesian learning for background subtraction based on local fusion feature


Bibliographic Details
Published in: IET Computer Vision, Vol. 10, No. 8, pp. 884-893
Main Authors: Yan, Junhua, Wang, Shunfei, Xie, Tianxia, Yang, Yong, Wang, Jiayi
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology / Wiley, 01.12.2016
ISSN: 1751-9632, 1751-9640
Description
Summary: To resist the adverse effects of shadow interference, illumination changes, poor texture, and scene jitter in object detection, and to improve performance, a background modelling method based on a local fusion feature and variational Bayesian learning is proposed. First, the U-LBSP (uniform local binary similarity patterns) texture feature, Lab colour feature, and location feature are combined to construct the local fusion feature. U-LBSP is modified from local binary patterns to reduce computational complexity and to better resist the influence of shadows and illumination changes. The joint colour and location features are introduced to handle poor texture and scene jitter. Then, the LFGMM (Gaussian mixture model based on the local fusion feature) is updated and learned by variational Bayes. To adapt to dynamically changing scenes, the variational expectation-maximisation algorithm is applied to optimise the distribution parameters, so the optimal number of Gaussian components and their parameters are estimated automatically at low computational cost. Experimental results show that the authors' method achieves outstanding detection performance, with strong robustness and high accuracy, especially under shadow disturbances, illumination changes, poor texture, and scene jitter.
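
As a rough sketch of the feature-construction step, the Python fragment below computes a similarity-based binary pattern around a centre pixel, tests the uniform-pattern condition, and concatenates the result with colour and location into one vector. The patch size, similarity threshold, and helper names are illustrative assumptions; the paper's exact U-LBSP encoding and normalisation are not reproduced here.

import numpy as np

def u_lbsp_bits(patch, threshold=30):
    # Similarity test against the centre pixel: bit = 1 when a neighbour
    # differs by at most `threshold` (LBSP tests similarity, unlike classic
    # LBP, which tests ordering). The threshold value is an assumption.
    h, w = patch.shape
    centre = int(patch[h // 2, w // 2])
    bits = []
    for i in range(h):
        for j in range(w):
            if (i, j) == (h // 2, w // 2):
                continue  # the centre pixel itself encodes no bit
            bits.append(int(abs(int(patch[i, j]) - centre) <= threshold))
    return bits

def is_uniform(bits):
    # "Uniform" in the LBP sense: at most two 0/1 transitions when the
    # bit string is read circularly; keeping only such patterns is what
    # cuts the size of the pattern codebook.
    n = len(bits)
    return sum(bits[k] != bits[(k + 1) % n] for k in range(n)) <= 2

def local_fusion_feature(grey_patch, lab_pixel, x, y):
    # Hypothetical fusion: U-LBSP bits + Lab colour + pixel location,
    # concatenated into a single vector for the mixture model.
    return np.concatenate([u_lbsp_bits(grey_patch), lab_pixel, [x, y]]).astype(float)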
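
For the variational Bayesian learning step, scikit-learn's BayesianGaussianMixture is a reasonable off-the-shelf stand-in for the paper's LFGMM update (the data, dimensionality, and hyper-parameters below are made up for illustration): variational inference under a Dirichlet weight prior drives surplus components toward zero weight, so the effective number of Gaussians is inferred rather than fixed in advance.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Stand-in for per-pixel fusion features collected over many frames;
# 13 dimensions matches 8 U-LBSP bits + 3 Lab channels + 2 coordinates.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 13))

vbgmm = BayesianGaussianMixture(
    n_components=5,                   # an upper bound, not the final count
    weight_concentration_prior=1e-2,  # small prior favours fewer components
    covariance_type="diag",
    max_iter=200,
).fit(features)

# Components whose posterior weight collapses are effectively pruned,
# mirroring the automatic model-order selection described above.
print("effective components:", int((vbgmm.weights_ > 0.05).sum()))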
DOI: 10.1049/iet-cvi.2016.0075