Utilising semantic technologies for intelligent indexing and retrieval of digital images


Detailed bibliography
Published in: Computing, Vol. 96, No. 7, pp. 651–668
Main authors: Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald
Format: Journal Article
Language: English
Publication details: Vienna: Springer Vienna, 01.07.2014 (Springer Nature B.V.)
ISSN: 0010-485X, 1436-5057
Description
Summary: The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing a colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they in principle rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this paper we present a semantically-enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries as well as the exploitation of lexical databases for explicit semantic-based query expansion.
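
The summary mentions exploiting lexical databases for explicit semantic-based query expansion, alongside ontology-driven implicit expansion. As a rough illustration of the explicit-expansion idea only (not the authors' actual engine), the sketch below assumes WordNet accessed through NLTK as the lexical database and broadens each query keyword with its synonyms and one level of hypernyms before the expanded terms are matched against image annotations:

# Illustrative sketch only: WordNet (via NLTK) stands in for the lexical
# database mentioned in the summary; this is not the paper's implementation.
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def expand_query(keywords):
    """Return the query keywords plus WordNet synonyms and direct hypernyms."""
    expanded = set(keywords)
    for word in keywords:
        for synset in wn.synsets(word):
            # synonyms of this sense of the word
            expanded.update(l.name().replace('_', ' ') for l in synset.lemmas())
            # one level of broader terms, to widen recall
            for hyper in synset.hypernyms():
                expanded.update(l.name().replace('_', ' ') for l in hyper.lemmas())
    return sorted(expanded)

For example, expand_query(["boat", "harbour"]) also yields broader terms such as "vessel" and "port" in addition to the original keywords, which a keyword-recurrence search alone would miss.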
DOI: 10.1007/s00607-013-0350-0