Video anomaly detection and localization via multivariate Gaussian fully convolution adversarial autoencoder

Published in: Neurocomputing (Amsterdam), Volume 369, pp. 92-105
Main authors: Li, Nanjun; Chang, Faliang
Format: Journal Article
Language: English
Publication details: Elsevier B.V., 5 December 2019
ISSN: 0925-2312, 1872-8286
Description
Abstract: In this paper, we present a novel deep-learning-based method for video anomaly detection and localization. The key idea of our approach is that the latent-space representations of normal samples are trained to accord with a specific prior distribution by the proposed deep neural network, the Multivariate Gaussian Fully Convolution Adversarial Autoencoder (MGFC-AAE), while the latent representations of anomalies do not. To extract deep features from input samples as latent representations, a convolutional neural network (CNN) serves as the encoder of the deep network. An energy-based method then derives an anomaly score from the probability that a test sample's latent representation follows the prior distribution. A two-stream framework integrates appearance and motion cues for more comprehensive detection, taking gradient and optical-flow patches as the inputs of the two streams. In addition, a multi-scale patch structure is introduced to handle perspective variation in some video scenes. Experiments are conducted on three public datasets; the results verify that our framework can accurately detect and localize abnormal objects in various video scenes, achieving competitive performance compared with other state-of-the-art work.
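The abstract describes the core mechanism (an adversarial autoencoder whose latent codes are pushed toward a multivariate Gaussian prior, with an energy function scoring how well a test sample fits that prior) only at a high level. Below is a minimal, hypothetical PyTorch sketch of that general technique, not the authors' implementation: the patch size, layer widths, single-channel input, loss weighting, and the standard-Gaussian energy formula are all illustrative assumptions.

# Minimal, hypothetical sketch of an adversarial autoencoder with a
# multivariate Gaussian prior and an energy-based anomaly score.
# Sizes and the single-channel input are illustrative assumptions,
# not the MGFC-AAE configuration from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64   # assumed latent size
PATCH = 32        # assumed input patch size (gradient/optical-flow patch)

class Encoder(nn.Module):
    # Fully convolutional encoder: 1 x PATCH x PATCH patch -> latent vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Conv2d(64, LATENT_DIM, 8),           # 8x8 -> 1x1
        )

    def forward(self, x):
        return self.net(x).flatten(1)

class Decoder(nn.Module):
    # Mirror of the encoder: latent vector -> reconstructed patch in [0, 1].
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 64, 8), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z.view(-1, LATENT_DIM, 1, 1))

class Discriminator(nn.Module):
    # Tells samples of the N(0, I) prior apart from encoder outputs.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, z):
        return self.net(z)

def energy_score(z):
    # Negative log-density of a standard multivariate Gaussian, up to a
    # constant: low when a code matches the prior (normal), high otherwise.
    return 0.5 * (z ** 2).sum(dim=1)

def train_step(enc, dec, disc, opt_ae, opt_d, x):
    # (1) Reconstruction: encoder + decoder minimize patch reconstruction error.
    z = enc(x)
    loss_rec = F.mse_loss(dec(z), x)

    # (2) Discriminator: real = prior samples, fake = (detached) latent codes.
    d_real = disc(torch.randn_like(z))
    d_fake = disc(z.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # (3) Encoder as generator: fool the discriminator so codes match the prior.
    d_gen = disc(z)
    loss_adv = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    opt_ae.zero_grad(); (loss_rec + 0.1 * loss_adv).backward(); opt_ae.step()
    return loss_rec.item(), loss_d.item()

A toy usage run, with random patches standing in for the gradient or optical-flow inputs of one stream:

enc, dec, disc = Encoder(), Decoder(), Discriminator()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
x = torch.rand(16, 1, PATCH, PATCH)   # toy batch of "normal" patches
print(train_step(enc, dec, disc, opt_ae, opt_d, x))
with torch.no_grad():
    print(energy_score(enc(x))[:4])   # per-patch anomaly scores

In this sketch, training on normal patches pulls encoder outputs toward N(0, I) through the discriminator; at test time, energy_score stays low for codes that match the prior and rises for candidate anomalies, mirroring the energy-based scoring the abstract describes.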
DOI: 10.1016/j.neucom.2019.08.044