ALDII: Adaptive Learning-based Document Image Inpainting to enhance the handwritten Chinese character legibility of human and machine

Research output: Journal Publication › Article › peer-review

Abstract

Document Image Inpainting (DII) has been applied to degraded documents, including financial and historical documents, to enhance the legibility of images for: (1) human readers, by providing images of high visual quality; and (2) machine recognizers such as Optical Character Recognition (OCR) systems, thereby reducing recognition errors. With the advent of Deep Learning (DL), DL-based DII methods have achieved remarkable improvements in either human or machine legibility. However, focusing on improving machine legibility degrades visual image quality, harming human readability. To address this contradiction, we propose an adaptive learning-based DII method, namely ALDII, that applies a domain adaptation strategy. Our approach acts as a plug-in module capable of constraining a shared feature space before optimizing human and machine legibility, respectively. We evaluate ALDII on a Chinese handwritten character dataset that includes single-character and text-line images. Compared with other state-of-the-art approaches, experimental results demonstrate the superior performance of ALDII on metrics of both human and machine legibility.
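The abstract describes a plug-in module that constrains a shared feature space before separate heads are optimized for human and machine legibility. The paper's actual architecture is not reproduced here; the following is a purely illustrative numpy sketch of that general idea, with a shared encoder feeding two task-specific decoder heads. All shapes, weights, and names are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "degraded character" image: a simple glyph plus noise.
clean = np.zeros((8, 8))
clean[2:6, 3] = 1.0  # a single vertical stroke
degraded = clean + 0.3 * rng.standard_normal(clean.shape)

# Shared encoder: project the flattened image into a common feature space.
# In the abstract's terms, this is the space a domain-adaptation loss
# would constrain during training (hypothetical weights here).
W_enc = 0.1 * rng.standard_normal((64, 16))
features = degraded.reshape(-1) @ W_enc  # shared representation

# Two task-specific decoder heads, analogous to optimizing human
# legibility and OCR legibility separately on top of the shared space.
W_human = 0.1 * rng.standard_normal((16, 64))
W_ocr = 0.1 * rng.standard_normal((16, 64))
restored_human = (features @ W_human).reshape(8, 8)
restored_ocr = (features @ W_ocr).reshape(8, 8)

# Both heads consume the SAME shared features, so constraining that
# space affects both outputs at once.
print(features.shape, restored_human.shape, restored_ocr.shape)
```

The key design point mirrored here is that the two objectives do not compete over separate representations: any constraint applied to `features` propagates to both restoration outputs.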
Original language: English
Article number: 128897
Journal: Neurocomputing
Volume: 616
DOIs
Publication status: Published Online - Feb 2025

Keywords

  • Document image inpainting
  • Domain adaptation
  • Blind image inpainting
  • Optical Character Recognition (OCR)

