Contextual Hashing for Large-Scale Image Search

Detailed bibliography
Published in: IEEE Transactions on Image Processing, Volume 23, Issue 4, pp. 1606-1614
Main authors: Liu, Zhen; Li, Houqiang; Zhou, Wengang; Zhao, Ruizhen; Tian, Qi
Format: Journal Article
Language: English
Publication details: New York, NY: Institute of Electrical and Electronics Engineers (IEEE), 1 April 2014
ISSN: 1057-7149, 1941-0042
Description
Summary: With the explosive growth of multimedia data on the Web, content-based image search has attracted considerable attention in the multimedia and computer vision communities. The most popular approach is based on the bag-of-visual-words model with invariant local features. Since the spatial context among local features is critical for visual content identification, many methods exploit the geometric clues of local features, including location, scale, and orientation, for explicit post-hoc geometric verification. However, given the high computational cost of full geometric verification, usually only a few of the initially top-ranked results are verified. In this paper, we propose to encode the spatial context of local features into binary codes and to achieve geometric verification implicitly through efficient comparison of these binary codes. In addition, we explore the multimode property of local features to further boost retrieval performance. Experiments on the Holidays, Paris, and Oxford Buildings benchmark data sets demonstrate the effectiveness of the proposed algorithm.
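
The abstract does not specify how the spatial-context binary codes are built or compared; the Python sketch below only illustrates the general idea of implicit geometric verification by Hamming-distance filtering of tentative feature matches. The function names, the 16-bit codes, and the distance threshold are illustrative assumptions, not the authors' implementation.

    def hamming_distance(a, b):
        # Number of differing bits between two binary codes stored as integers.
        return bin(a ^ b).count("1")

    def filter_matches(query_codes, db_codes, matches, max_dist=4):
        # Keep tentative feature matches whose spatial-context codes agree
        # within max_dist bits; this stands in for an explicit geometric check.
        return [(q, d) for q, d in matches
                if hamming_distance(query_codes[q], db_codes[d]) <= max_dist]

    # Toy usage with hypothetical 16-bit spatial-context codes.
    query_codes = [0b1010101010101010, 0b1111000011110000]
    db_codes    = [0b1010101010101011, 0b0000111100001111]
    matches     = [(0, 0), (1, 1)]  # tentative visual-word matches (query_idx, db_idx)
    print(filter_matches(query_codes, db_codes, matches))  # -> [(0, 0)]

Such a check costs one XOR and popcount per match, which is why it can be applied to all candidate images rather than only a few top-ranked results, as full geometric verification would require.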
DOI: 10.1109/TIP.2014.2305072