Mining user hidden semantics from image content for image retrieval

Xiangjun Shen, Shiguang Ju, Siu Yeung Cho, Feng Li

Research output: Journal Publication › Article › peer-review

6 Citations (Scopus)

Abstract

The problem confronted in content-based image retrieval research is the semantic gap between low-level feature representations and the high-level semantics of images. This paper describes a way to bridge this gap: by learning from similar images provided by the user, the system extracts similar region pairs and automatically classifies them as object or non-object semantics, and as object-relation or non-object-relation semantics, based on the distances and spatial relationships within the similar region pairs themselves. The system also extracts the interesting parts of the features from each similar region pair and then dynamically adjusts the weight of each interesting feature and region pair. Using these object and object-relation semantics, together with the dynamic weight adjustment learned from the similar images, the semantics of the similar images can be mined and used to search for further similar images. Experiments show that the proposed system retrieves similar images effectively and efficiently.
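
The sketch below illustrates the general idea of dynamic feature weighting described in the abstract: features that remain close across user-marked similar region pairs are treated as the "interesting" ones and receive larger weights in the retrieval distance. The function names and the inverse-distance weighting formula are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def adjust_feature_weights(region_pairs, eps=1e-6):
    # region_pairs: list of (feat_a, feat_b) feature vectors from matched
    # similar region pairs. Features with small average distance across the
    # pairs are assumed "interesting" and get larger weights (hypothetical rule).
    diffs = np.array([np.abs(a - b) for a, b in region_pairs])  # per-pair, per-feature distance
    mean_diff = diffs.mean(axis=0)                              # average distance per feature
    weights = 1.0 / (mean_diff + eps)                           # smaller distance -> larger weight
    return weights / weights.sum()                              # normalise to sum to 1

def weighted_distance(query_feat, candidate_feat, weights):
    # Weighted L1 distance used to rank candidate images against the query.
    return float(np.sum(weights * np.abs(query_feat - candidate_feat)))

# Example: three matched region pairs taken from two user-marked similar images.
pairs = [(np.array([0.9, 0.2, 0.5]), np.array([0.85, 0.7, 0.50])),
         (np.array([0.4, 0.1, 0.3]), np.array([0.42, 0.6, 0.31])),
         (np.array([0.7, 0.3, 0.8]), np.array([0.69, 0.9, 0.82]))]
w = adjust_feature_weights(pairs)   # features 0 and 2 dominate the weights
d = weighted_distance(np.array([0.5, 0.5, 0.5]), np.array([0.52, 0.1, 0.49]), w)
```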

Original language: English
Pages (from-to): 145-164
Number of pages: 20
Journal: Journal of Visual Communication and Image Representation
Volume: 19
Issue number: 3
DOIs
Publication status: Published - Apr 2008
Externally published: Yes

Keywords

  • Content-based image retrieval
  • Dynamical feature selection
  • Dynamical weight adjusting
  • Object semantics
  • Object-relation semantics
  • Similar region pair matching

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
