TY - GEN
T1 - Conditional Adversarial Transfer for Glaucoma Diagnosis
AU - Wang, Jingwen
AU - Yan, Yuguang
AU - Xu, Yanwu
AU - Zhao, Wei
AU - Min, Huaqing
AU - Tan, Mingkui
AU - Liu, Jiang
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Deep learning has achieved great success in image classification tasks when sufficient labeled training images are available. However, in fundus-image-based glaucoma diagnosis, training data are often very limited because data labeling is expensive. Moreover, when facing a new application environment, it is difficult to train a network with only a few labeled training images. In this case, images from an auxiliary domain (i.e., the source domain) can be exploited to improve performance. Unfortunately, directly using the source domain data may not yield promising performance for the domain of interest (i.e., the target domain), owing to the distribution discrepancy between the two domains. In this paper, focusing on glaucoma diagnosis, we propose a deep adversarial transfer learning method conditioned on label information to match the distributions of the source and target domains, so that labeled source images can be leveraged to improve classification performance in the target domain. Unlike most existing adversarial transfer learning methods, which consider only marginal distribution matching, we seek to match the label-conditional distributions by handling images with different labels separately. We conduct experiments on three glaucoma datasets and adopt multiple evaluation metrics to verify the effectiveness of the proposed method.
AB - Deep learning has achieved great success in image classification tasks when sufficient labeled training images are available. However, in fundus-image-based glaucoma diagnosis, training data are often very limited because data labeling is expensive. Moreover, when facing a new application environment, it is difficult to train a network with only a few labeled training images. In this case, images from an auxiliary domain (i.e., the source domain) can be exploited to improve performance. Unfortunately, directly using the source domain data may not yield promising performance for the domain of interest (i.e., the target domain), owing to the distribution discrepancy between the two domains. In this paper, focusing on glaucoma diagnosis, we propose a deep adversarial transfer learning method conditioned on label information to match the distributions of the source and target domains, so that labeled source images can be leveraged to improve classification performance in the target domain. Unlike most existing adversarial transfer learning methods, which consider only marginal distribution matching, we seek to match the label-conditional distributions by handling images with different labels separately. We conduct experiments on three glaucoma datasets and adopt multiple evaluation metrics to verify the effectiveness of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=85075467448&partnerID=8YFLogxK
U2 - 10.1109/EMBC.2019.8857308
DO - 10.1109/EMBC.2019.8857308
M3 - Conference contribution
C2 - 31946300
AN - SCOPUS:85075467448
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 2032
EP - 2035
BT - 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2019
Y2 - 23 July 2019 through 27 July 2019
ER -