Reinforcement Learning for Visual Object Detection

Published in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Volume 2016-January, pp. 2894-2902
Main authors: Mathe, Stefan; Pirinen, Aleksis; Sminchisescu, Cristian
Format: Conference paper; Chapter
Language: English
Publication details: IEEE, 01.06.2016
ISBN: 9781467388511, 1467388513
ISSN: 1063-6919
DOI: 10.1109/CVPR.2016.316
Description
Summary: One of the most widely used strategies for visual object detection is based on exhaustive spatial hypothesis search. While methods like sliding windows have been successful and effective for many years, they remain brute-force, independent of the image content and the visual category being searched. In this paper we present principled sequential models that accumulate evidence collected at a small set of image locations in order to detect visual objects effectively. By formulating sequential search as reinforcement learning of the search policy (including the stopping condition), our fully trainable model can explicitly balance, for each class, the conflicting goals of exploration (sampling more image regions for better accuracy) and exploitation (stopping the search efficiently once sufficiently confident about the target's location). The methodology is general and applicable to any detector response function. We report encouraging results on the PASCAL VOC 2012 object detection test set, showing that the proposed methodology achieves almost two orders of magnitude speed-up over sliding window methods.
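
To make the exploration/exploitation trade-off described in the summary concrete, below is a minimal, self-contained sketch of a sequential detection loop of this general shape. It is not the authors' implementation: `threshold_policy`, `propose_region`, and `detector_score` are toy placeholders invented for illustration; in the paper the search policy, including the stopping condition, is learned with reinforcement learning, and the detector response can be any class-specific detector.

```python
import random

def propose_region(state):
    """Toy region proposal: a random 0.2 x 0.2 box in a unit image."""
    x, y = random.random() * 0.8, random.random() * 0.8
    return (x, y, x + 0.2, y + 0.2)

def detector_score(image, region):
    """Toy detector response: higher when the box is near the image
    center; stands in for an arbitrary per-class detector."""
    cx = (region[0] + region[2]) / 2.0
    cy = (region[1] + region[3]) / 2.0
    return 1.0 - ((cx - 0.5) ** 2 + (cy - 0.5) ** 2)

def threshold_policy(state, confidence=0.99):
    """Toy stand-in for the learned policy: stop once any evaluated
    region scores above `confidence`, otherwise fixate a new region."""
    history = state["history"]
    if history and max(score for _, score in history) >= confidence:
        return ("stop", None)
    return ("fixate", propose_region(state))

def sequential_detect(image, policy, max_steps=20):
    """Accumulate detector evidence at a small set of image locations,
    letting the policy decide when to stop searching."""
    history = []  # (region, score) pairs evaluated so far
    state = {"image": image, "history": history}
    for _ in range(max_steps):
        action, region = policy(state)
        if action == "stop":  # confident enough: exploit and report
            break
        history.append((region, detector_score(image, region)))  # explore
    return max(history, key=lambda rs: rs[1]) if history else None

print(sequential_detect(image=None, policy=threshold_policy))
```

Each iteration either samples one more region (exploration, buying accuracy at extra cost) or stops and reports the best hypothesis seen so far (exploitation); this per-step decision is exactly what the paper proposes to learn per class, rather than hand-coding a fixed threshold as done here.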