Text detection in born-digital images using multiple layer images

Chao Zeng, Wenjing Jia, Xiangjian He

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

10 Citations (Scopus)

Abstract

In this paper, a new framework for detecting text in webpage and email images is presented. The original image is split into multiple layer images based on the maximum gradient difference (MGD) values so that text with both strong and weak contrast can be detected. Connected component processing and text detection are performed in each layer image. A novel texture descriptor, named T-LBP, is proposed to further filter out non-text candidates with a trained SVM classifier. The ICDAR 2011 born-digital image dataset is used to evaluate and demonstrate the performance of the proposed method. Under the same performance evaluation criteria, the proposed method outperforms the winning algorithm of the ICDAR 2011 Robust Reading Competition Challenge 1.
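For readers unfamiliar with the MGD step, the sketch below illustrates one way an image could be split into layer images by thresholding maximum gradient difference values. It is a minimal illustration of the general idea only; the window size, thresholds, and two-layer split are assumptions made for this example and are not the parameters or layer construction used in the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter1d, minimum_filter1d

def mgd_layers(gray, window=11, thresholds=(10.0, 40.0)):
    """Split a grayscale image into layer images by maximum gradient
    difference (MGD).

    Illustrative sketch only: the window size, thresholds, and the
    weak/strong two-layer split are hypothetical choices, not the
    settings described in the paper.
    """
    gray = gray.astype(np.float32)

    # Horizontal gradient (central difference along each row).
    grad = np.zeros_like(gray)
    grad[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0

    # MGD: difference between the largest and smallest gradient values
    # inside a horizontal sliding window centred on each pixel.
    mgd = (maximum_filter1d(grad, size=window, axis=1)
           - minimum_filter1d(grad, size=window, axis=1))

    lo, hi = thresholds
    # Weak-contrast layer keeps pixels with moderate MGD;
    # strong-contrast layer keeps pixels with high MGD.
    weak_layer = np.where((mgd >= lo) & (mgd < hi), gray, 0)
    strong_layer = np.where(mgd >= hi, gray, 0)
    return weak_layer, strong_layer
```

Each returned layer would then be processed independently (connected component analysis and text candidate filtering), which is how the framework handles text of differing contrast levels.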

Original language: English
Title of host publication: 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013 - Proceedings
Pages: 1947-1951
Number of pages: 5
DOIs
Publication status: Published - 18 Oct 2013
Externally published: Yes
Event: 2013 38th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013 - Vancouver, BC, Canada
Duration: 26 May 2013 - 31 May 2013

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2013 38th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013
Country/Territory: Canada
City: Vancouver, BC
Period: 26/05/13 - 31/05/13

Keywords

  • Multiple layer image
  • T-LBP
  • maximum gradient difference
  • text detection

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
