Reinforcement Learning for Visual Object Detection



Detailed bibliography
Published in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2016-January, pp. 2894-2902
Main authors: Mathe, Stefan; Pirinen, Aleksis; Sminchisescu, Cristian
Format: Conference paper; chapter
Language: English
Published: IEEE, 01.06.2016
ISBN: 9781467388511, 1467388513
ISSN: 1063-6919
Description
Summary: One of the most widely used strategies for visual object detection is based on exhaustive spatial hypothesis search. While methods like sliding windows have been successful and effective for many years, they are still brute-force, independent of the image content and the visual category being searched for. In this paper we present principled sequential models that accumulate evidence collected at a small set of image locations in order to detect visual objects effectively. By formulating sequential search as reinforcement learning of the search policy (including the stopping condition), our fully trainable model can explicitly balance, for each class, the conflicting goals of exploration (sampling more image regions for better accuracy) and exploitation (stopping the search efficiently when sufficiently confident about the target's location). The methodology is general and applicable to any detector response function. We report encouraging results on the PASCAL VOC 2012 object detection test set, showing that the proposed methodology achieves almost two orders of magnitude speed-up over sliding window methods.
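To make the search-with-stopping idea concrete, here is a minimal greedy sketch of sequential region evaluation with an early-stop condition. This is not the paper's learned policy (which is trained with reinforcement learning); `score_fn`, the region labels, and `stop_threshold` are all illustrative assumptions. It only shows the exploration/exploitation trade-off the abstract describes: keep fixating regions for accuracy, but stop as soon as the evidence is confident enough.

```python
def sequential_search(score_fn, regions, stop_threshold=0.9, max_fixations=10):
    """Greedy sketch of sequential object search with a stopping condition.

    At each step the agent fixates one candidate region and queries the
    detector response there (exploration). Once the best score so far
    clears stop_threshold, it stops early (exploitation) instead of
    exhaustively scanning every region as a sliding window would.
    """
    best_region, best_score = None, float("-inf")
    fixations = 0
    for region in regions[:max_fixations]:
        fixations += 1
        score = score_fn(region)          # detector response at this region
        if score > best_score:
            best_region, best_score = region, score
        if best_score >= stop_threshold:  # confident enough: stop the search
            break
    return best_region, best_score, fixations
```

For example, with hypothetical region scores `{"a": 0.2, "b": 0.95, "c": 0.5}`, the search stops after two fixations instead of three, returning region `"b"`. In the paper this stopping decision is itself part of the policy learned per class, rather than a fixed threshold.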
DOI:10.1109/CVPR.2016.316