Glaucoma Detection Based on Deep Learning Network in Fundus Image

Huazhu Fu, Jun Cheng, Yanwu Xu, Jiang Liu

Research output: Chapter in Book/Conference proceeding › Book Chapter › peer-review

30 Citations (Scopus)

Abstract

Glaucoma is a chronic eye disease that leads to irreversible vision loss. In this chapter, we introduce two state-of-the-art glaucoma detection methods based on deep learning techniques. The first is a multi-label segmentation network, named M-Net, which solves optic disc and optic cup segmentation jointly. M-Net contains a multi-scale U-shaped convolutional network with side-output layers to learn discriminative representations and produce the segmentation probability map. The vertical cup-to-disc ratio (CDR) is then calculated from the segmented optic disc and cup to assess glaucoma risk. The second network is the disc-aware ensemble network, named DENet, which integrates the deep hierarchical context of the global fundus image with the local optic disc region. Four deep streams at different levels and modules are considered: a global image stream, a segmentation-guided network, a local disc region stream, and a disc polar transformation stream. DENet produces the glaucoma detection result directly from the image, without segmentation. Finally, we compare the two deep learning methods with other related methods on several glaucoma detection datasets.
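To make the CDR step concrete, the following is a minimal Python sketch of how the vertical cup-to-disc ratio can be derived from a two-channel disc/cup segmentation probability map of the kind M-Net produces. The thresholds, helper names, and synthetic example are illustrative assumptions, not the chapter's actual implementation.

    import numpy as np

    def masks_from_probability_map(prob_map, disc_thr=0.5, cup_thr=0.5):
        """Binarise a two-channel (disc, cup) segmentation probability map.
        The 0.5 thresholds are illustrative assumptions, not the chapter's values."""
        disc_mask = prob_map[..., 0] > disc_thr
        cup_mask = prob_map[..., 1] > cup_thr
        return disc_mask, cup_mask

    def vertical_diameter(mask):
        """Vertical extent of a (roughly convex) binary region, measured as the
        largest count of foreground pixels in any single image column."""
        return int(mask.sum(axis=0).max())

    def vertical_cdr(disc_mask, cup_mask):
        """Vertical cup-to-disc ratio: cup diameter divided by disc diameter."""
        disc_d = vertical_diameter(disc_mask)
        cup_d = vertical_diameter(cup_mask)
        return cup_d / max(disc_d, 1)  # guard against an empty disc mask

    # Example with a synthetic H x W x 2 probability map.
    prob_map = np.zeros((256, 256, 2))
    prob_map[60:200, 80:220, 0] = 0.9    # disc region
    prob_map[100:170, 110:190, 1] = 0.9  # cup region inside the disc
    disc, cup = masks_from_probability_map(prob_map)
    print(f"vertical CDR = {vertical_cdr(disc, cup):.2f}")

In a screening setting, the computed CDR would be compared against a clinically chosen cutoff, with a larger ratio indicating a higher glaucoma risk, which is how the segmentation output is used to assess risk in the chapter.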

Original language: English
Title of host publication: Advances in Computer Vision and Pattern Recognition
Publisher: Springer London
Pages: 119-137
Number of pages: 19
DOIs
Publication status: Published - 2019
Externally published: Yes

Publication series

Name: Advances in Computer Vision and Pattern Recognition
ISSN (Print): 2191-6586
ISSN (Electronic): 2191-6594

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

