Abstract
Background: Recent advances in deep learning have led researchers to apply deep networks to medical image analysis. However, pathological image analysis based on deep learning networks faces a number of challenges, such as the high (gigapixel) resolution of pathological images and the lack of annotations. To address these challenges, we propose a training strategy called deep-reverse active learning (DRAL) and an atrous DenseNet (ADN) for pathological image classification. The proposed DRAL improves the classification accuracy of widely used deep learning networks such as VGG-16 and ResNet by removing mislabeled patches from the training set. Because the size of a cancer area varies widely in pathological images, the proposed ADN integrates atrous convolutions with the dense block for multiscale feature extraction.

Results: The proposed DRAL and ADN are evaluated on three pathological datasets: BACH, CCG, and UCSB. The experimental results demonstrate the excellent performance of the proposed DRAL + ADN framework, which achieves patch-level average classification accuracies (ACA) of 94.10%, 92.05%, and 97.63% on the BACH, CCG, and UCSB validation sets, respectively.

Conclusions: The DRAL + ADN framework is a promising candidate for boosting the performance of deep learning models trained on partially mislabeled datasets.
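The two ideas summarized above can be illustrated with short sketches. First, DRAL is described only at the level of "removing mislabeled patches from the training set"; the snippet below is a generic label-cleaning step (drop patches that the current model confidently assigns to a class other than their given label), written as an assumed illustration rather than the authors' DRAL procedure. The function name and the confidence threshold are hypothetical.

```python
import torch

@torch.no_grad()
def filter_suspect_patches(model, patches, labels, confidence=0.9):
    """Assumed illustration of label cleaning: keep a training patch unless the
    current model predicts a different class with high confidence."""
    model.eval()
    probs = torch.softmax(model(patches), dim=1)      # (N, num_classes)
    pred_conf, pred_class = probs.max(dim=1)
    suspect = (pred_class != labels) & (pred_conf > confidence)
    keep = ~suspect
    return patches[keep], labels[keep]
```

Second, the core architectural idea of ADN, combining atrous (dilated) convolutions with dense connectivity so that concatenated features cover several receptive-field sizes, can be sketched as below. The growth rate, dilation rates, and class name are illustrative assumptions, not the published ADN configuration.

```python
import torch
import torch.nn as nn

class AtrousDenseBlock(nn.Module):
    """Minimal sketch of a dense block whose layers use increasing dilation
    rates, yielding multiscale features for patches with cancer areas of
    widely varying size."""

    def __init__(self, in_channels, growth_rate=32, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                # Atrous 3x3 convolution; padding=d keeps the spatial size.
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=d, dilation=d, bias=False),
            ))
            channels += growth_rate  # dense connectivity: inputs accumulate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

if __name__ == "__main__":
    block = AtrousDenseBlock(in_channels=64)
    patch_features = torch.randn(1, 64, 128, 128)   # feature map of an image patch
    print(block(patch_features).shape)               # torch.Size([1, 160, 128, 128])
```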
Original language | English |
---|---|
Article number | 445 |
Journal | BMC Bioinformatics |
Volume | 20 |
Issue number | 1 |
DOIs | |
Publication status | Published - 28 Aug 2019 |
Externally published | Yes |
Keywords
- Active learning
- Atrous convolution
- Pathological image classification
- Deep learning
ASJC Scopus subject areas
- Structural Biology
- Biochemistry
- Molecular Biology
- Computer Science Applications
- Applied Mathematics