Contextual Hashing for Large-Scale Image Search


Bibliographic Details
Published in:IEEE transactions on image processing Vol. 23; no. 4; pp. 1606 - 1614
Main Authors: Liu, Zhen, Li, Houqiang, Zhou, Wengang, Zhao, Ruizhen, Tian, Qi
Format: Journal Article
Language:English
Published: New York, NY: IEEE, 01.04.2014
ISSN: 1057-7149, 1941-0042
Description
Summary: With the explosive growth of multimedia data on the Web, content-based image search has attracted considerable attention in the multimedia and computer vision communities. The most popular approach is based on the bag-of-visual-words model with invariant local features. Since the spatial context among local features is critical for visual content identification, many methods exploit the geometric clues of local features, including location, scale, and orientation, for explicit geometric verification as a post-processing step. However, given the high computational cost of full geometric verification, usually only a few of the initially top-ranked results are verified. In this paper, we propose to encode the spatial context of local features into binary codes and to achieve geometric verification implicitly through efficient comparison of these binary codes. In addition, we explore the multimode property of local features to further boost retrieval performance. Experiments on the Holidays, Paris, and Oxford Buildings benchmark data sets demonstrate the effectiveness of the proposed algorithm.
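The core idea summarized above, replacing costly explicit geometric verification with a comparison of binary spatial-context codes, can be illustrated with a minimal sketch. This is not the authors' implementation: the 16-bit codes, the candidate names, and the distance threshold below are all hypothetical, chosen only to show how Hamming distance between binary codes yields an implicit consistency check.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary codes."""
    return bin(a ^ b).count("1")

# Hypothetical 16-bit spatial-context codes for a query feature and for
# matching features in two candidate database images (illustrative values).
query_code = 0b1011001110001101
candidates = {
    "img_A": 0b1011001110001001,  # differs in 1 bit  -> spatially consistent
    "img_B": 0b0100110001110010,  # differs in 16 bits -> rejected
}

THRESHOLD = 4  # assumed maximum Hamming distance for acceptance
verified = [name for name, code in candidates.items()
            if hamming(query_code, code) <= THRESHOLD]
print(verified)  # -> ['img_A']
```

Because the comparison is a single XOR plus a population count, it can be applied to every candidate in the ranked list rather than only the top few, which is the efficiency argument made in the abstract.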
DOI:10.1109/TIP.2014.2305072