Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds

Published in: International Journal of Computer Vision, Vol. 108, No. 1-2, pp. 97-114
Main Authors: Vijayanarasimhan, Sudheendra; Grauman, Kristen
Format: Journal Article
Language: English
Published: Boston: Springer US, 1 May 2014
ISSN: 0920-5691, 1573-1405
DOI: 10.1007/s11263-014-0721-9
Description
Summary: Active learning and crowdsourcing are promising ways to efficiently build up training sets for object recognition, but thus far techniques are tested in artificially controlled settings. Typically the vision researcher has already determined the dataset's scope, the labels "actively" obtained are in fact already known, and/or the crowd-sourced collection process is iteratively fine-tuned. We present an approach for live learning of object detectors, in which the system autonomously refines its models by actively requesting crowd-sourced annotations on images crawled from the Web. To address the technical issues such a large-scale system entails, we introduce a novel part-based detector amenable to linear classifiers, and show how to identify its most uncertain instances in sub-linear time with a hashing-based solution. We demonstrate the approach with experiments of unprecedented scale and autonomy, and show it successfully improves the state-of-the-art for the most challenging objects in the PASCAL VOC benchmark. In addition, we show our detector competes well with popular nonlinear classifiers that are much more expensive to train.
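
The selection step the abstract alludes to, finding the unlabeled examples closest to a linear detector's decision boundary without scanning the entire crawled pool, can be pictured with a small sketch. The Python snippet below is an illustrative assumption only (random two-bit sign hashes, made-up names and sizes), not the paper's exact hyperplane-hashing scheme: pool points whose hash codes collide with the code of the current weight vector w tend to have a small margin |w.x|, so only that shortlist needs exact scoring before an annotation request is sent to the crowd.

# A minimal, hypothetical sketch of hashing-based uncertainty search for a
# linear detector. The two-bit sign-hash construction, the function names,
# and every parameter below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 128, 32                        # feature dim / hash-bit pairs (assumed)
U = rng.standard_normal((d, n_pairs))       # random directions for bit 1
V = rng.standard_normal((d, n_pairs))       # random directions for bit 2

def hash_pool(X):
    """Two-bit sign codes for unlabeled pool points (computed once, offline)."""
    return np.concatenate([X @ U > 0, X @ V > 0], axis=1)

def hash_hyperplane(w):
    """Matching code for the current detector weights w; the second bit is
    flipped so that points lying near the hyperplane tend to collide with it."""
    return np.concatenate([U.T @ w > 0, ~(V.T @ w > 0)])

# Offline: index the crawled pool. Online: compare codes to the query code.
# A real system would store the codes in hash tables and probe only the
# matching buckets; the exhaustive Hamming scan below keeps the sketch short.
X_pool = rng.standard_normal((10_000, d))   # stand-in for crawled image features
codes = hash_pool(X_pool)
w = rng.standard_normal(d)                  # stand-in for current detector weights
hamming = np.count_nonzero(codes != hash_hyperplane(w), axis=1)
candidates = np.argsort(hamming)[:50]       # shortlist of likely-uncertain items

# Only the shortlist is scored exactly; the lowest-margin item goes to the crowd.
query = candidates[np.argmin(np.abs(X_pool[candidates] @ w))]
print("request crowd annotation for pool item", query)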