An Annotation-Free Restoration Network for Cataractous Fundus Images

Heng Li, Haofeng Liu, Yan Hu, Huazhu Fu, Yitian Zhao, Hanpei Miao, Jiang Liu

Research output: Journal Publication › Article › peer-review

32 Citations (Scopus)

Abstract

Cataracts are the leading cause of vision loss worldwide. Restoration algorithms have been developed to improve the readability of cataractous fundus images and thereby increase diagnostic certainty for cataract patients. Unfortunately, the annotation requirement limits the application of these algorithms in clinics. This paper proposes an annotation-free cataractous fundus image restoration network (ArcNet) to boost the clinical practicability of restoration. ArcNet requires no annotations: the high-frequency component extracted from fundus images replaces segmentation in the preservation of retinal structures. The restoration model is learned from synthesized images and then adapted to real cataract images. Extensive experiments verify the performance and effectiveness of ArcNet. ArcNet achieves favorable performance against state-of-the-art algorithms and facilitates the diagnosis of ocular fundus diseases in cataract patients. Its ability to properly restore cataractous images in the absence of annotated data promises outstanding clinical practicability.
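The abstract mentions extracting the high-frequency component of a fundus image as an annotation-free substitute for segmentation when preserving retinal structures. A common way to obtain such a component is Gaussian high-pass filtering, i.e. subtracting a blurred copy of the image from the original; the sketch below illustrates this idea on a synthetic image. The function name, the choice of a Gaussian low-pass, and the `sigma` value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_component(image, sigma=5.0):
    """Return the high-frequency residual of an image.

    Low-frequency content (smooth illumination, cataract-like haze) is
    estimated with a Gaussian blur and subtracted, so sharp structures
    such as retinal vessels dominate the residual.
    """
    image = image.astype(np.float32)
    low = gaussian_filter(image, sigma=sigma)  # low-frequency estimate
    return image - low                          # high-frequency residual

# Toy example: a sharp bright line (a stand-in "vessel") on top of a
# smooth illumination gradient.
img = np.zeros((64, 64), dtype=np.float32)
img += np.linspace(0.0, 50.0, 64)[None, :]  # smooth background gradient
img[32, :] += 100.0                          # sharp structure at row 32

hf = high_frequency_component(img, sigma=5.0)
```

In `hf`, the smooth gradient is largely cancelled while the sharp row retains a strong response, which is the property that lets edge-like retinal structures be preserved without segmentation labels.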

Original language: English
Pages (from-to): 1699-1710
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Volume: 41
Issue number: 7
DOIs
Publication status: Published - 1 Jul 2022
Externally published: Yes

Keywords

  • Cataracts
  • Domain adaptation
  • Fundus image restoration
  • High-frequency component

ASJC Scopus subject areas

  • Software
  • Radiological and Ultrasound Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
