
Learning the features of diabetic retinopathy with convolutional neural networks

Research output: Contribution to Journal/Magazine › Meeting abstract › peer-review

Published

Standard

Learning the features of diabetic retinopathy with convolutional neural networks. / Pratt, H.; Williams, B. M.; Broadbent, D.; Harding, S. P.; Coenen, F.; Zheng, Y.

In: EUROPEAN JOURNAL OF OPHTHALMOLOGY, Vol. 29, No. 3, 16.05.2019, p. NP15-NP16.


Harvard

Pratt, H, Williams, BM, Broadbent, D, Harding, SP, Coenen, F & Zheng, Y 2019, 'Learning the features of diabetic retinopathy with convolutional neural networks', EUROPEAN JOURNAL OF OPHTHALMOLOGY, vol. 29, no. 3, pp. NP15-NP16. <https://journals.sagepub.com/doi/full/10.1177/1120672119847084>

APA

Pratt, H., Williams, B. M., Broadbent, D., Harding, S. P., Coenen, F., & Zheng, Y. (2019). Learning the features of diabetic retinopathy with convolutional neural networks. EUROPEAN JOURNAL OF OPHTHALMOLOGY, 29(3), NP15-NP16. https://journals.sagepub.com/doi/full/10.1177/1120672119847084

Vancouver

Pratt H, Williams BM, Broadbent D, Harding SP, Coenen F, Zheng Y. Learning the features of diabetic retinopathy with convolutional neural networks. EUROPEAN JOURNAL OF OPHTHALMOLOGY. 2019 May 16;29(3):NP15-NP16.

Author

Pratt, H. ; Williams, B. M. ; Broadbent, D. ; Harding, S. P. ; Coenen, F. ; Zheng, Y. / Learning the features of diabetic retinopathy with convolutional neural networks. In: EUROPEAN JOURNAL OF OPHTHALMOLOGY. 2019 ; Vol. 29, No. 3. pp. NP15-NP16.

Bibtex

@article{242281380ada49a09281c77bcf3bdc15,
title = "Learning the features of diabetic retinopathy with convolutional neural networks",
abstract = "Design: This is a study to evaluate machine learning feature extraction approaches on an established public dataset of retinal images in the diagnosis of diabetic retinopathy (DR). Purpose: Convolutional Neural Networks (CNNs) have been demonstrated to achieve state-of-the-art results on many complex computer vision tasks, including the automated diagnosis of diseases such as DR. CNNs are a powerful development of machine learning, which not only produce excellent classification results based on image features but also determine the relevant features automatically. However, the current inability to demonstrate what these features are has led to CNN-based approaches being considered “black box” methods, which are difficult to accept. In this work, we demonstrate two methods for identifying the learned features, applying these to the diagnosis of DR from fundus images. Methods: Our aim is to demonstrate the successful identification of the relevant features determined during the classification process. We build two identification methods into the established DenseNet architecture and evaluate them for DR diagnosis. We use a large dataset of over 88k retinal fundus images (http://www.eyepacs.com/), which are classified in terms of the presence and severity of DR, i.e. [no, mild, moderate, severe, proliferative]. We train on a random subset of 78,076 (88%) images, reserving 10,626 for testing, and extract feature maps for each test image, which identify the details that have contributed most strongly to the CNN prediction. Results: Using this approach, we have been able to determine, on a per-image basis, the regions identified within the CNN as being relevant. For example, for severe DR, haemorrhages, microaneurysms and cotton wool spots have been identified as contributing to the diagnosis, while laser spots, neovascularisation and venous reduplication have been automatically identified as corresponding to proliferative DR. Conclusions: This feature extraction has great potential for providing a method of identifying and visualising the features that have contributed to the automated classification, which is an important element in encouraging confidence in CNN-based approaches from users and clinicians. This work can also aid in the validation and further development of CNN methods, with the potential for allowing previously unidentified yet relevant features to be determined.",
author = "H. Pratt and Williams, {B. M.} and D. Broadbent and Harding, {S. P.} and F. Coenen and Y. Zheng",
year = "2019",
month = may,
day = "16",
language = "English",
volume = "29",
pages = "NP15--NP16",
journal = "EUROPEAN JOURNAL OF OPHTHALMOLOGY",
issn = "1120-6721",
publisher = "Wichtig Publishing Srl",
number = "3",
url = "https://journals.sagepub.com/doi/full/10.1177/1120672119847084",
doi = "10.1177/1120672119847084",
}

RIS

TY - JOUR

T1 - Learning the features of diabetic retinopathy with convolutional neural networks

AU - Pratt, H.

AU - Williams, B. M.

AU - Broadbent, D.

AU - Harding, S. P.

AU - Coenen, F.

AU - Zheng, Y.

PY - 2019/5/16

Y1 - 2019/5/16

N2 - Design: This is a study to evaluate machine learning feature extraction approaches on an established public dataset of retinal images in the diagnosis of diabetic retinopathy (DR). Purpose: Convolutional Neural Networks (CNNs) have been demonstrated to achieve state-of-the-art results on many complex computer vision tasks, including the automated diagnosis of diseases such as DR. CNNs are a powerful development of machine learning, which not only produce excellent classification results based on image features but also determine the relevant features automatically. However, the current inability to demonstrate what these features are has led to CNN-based approaches being considered “black box” methods, which are difficult to accept. In this work, we demonstrate two methods for identifying the learned features, applying these to the diagnosis of DR from fundus images. Methods: Our aim is to demonstrate the successful identification of the relevant features determined during the classification process. We build two identification methods into the established DenseNet architecture and evaluate them for DR diagnosis. We use a large dataset of over 88k retinal fundus images (http://www.eyepacs.com/), which are classified in terms of the presence and severity of DR, i.e. [no, mild, moderate, severe, proliferative]. We train on a random subset of 78,076 (88%) images, reserving 10,626 for testing, and extract feature maps for each test image, which identify the details that have contributed most strongly to the CNN prediction. Results: Using this approach, we have been able to determine, on a per-image basis, the regions identified within the CNN as being relevant. For example, for severe DR, haemorrhages, microaneurysms and cotton wool spots have been identified as contributing to the diagnosis, while laser spots, neovascularisation and venous reduplication have been automatically identified as corresponding to proliferative DR. Conclusions: This feature extraction has great potential for providing a method of identifying and visualising the features that have contributed to the automated classification, which is an important element in encouraging confidence in CNN-based approaches from users and clinicians. This work can also aid in the validation and further development of CNN methods, with the potential for allowing previously unidentified yet relevant features to be determined.

AB - Design: This is a study to evaluate machine learning feature extraction approaches on an established public dataset of retinal images in the diagnosis of diabetic retinopathy (DR). Purpose: Convolutional Neural Networks (CNNs) have been demonstrated to achieve state-of-the-art results on many complex computer vision tasks, including the automated diagnosis of diseases such as DR. CNNs are a powerful development of machine learning, which not only produce excellent classification results based on image features but also determine the relevant features automatically. However, the current inability to demonstrate what these features are has led to CNN-based approaches being considered “black box” methods, which are difficult to accept. In this work, we demonstrate two methods for identifying the learned features, applying these to the diagnosis of DR from fundus images. Methods: Our aim is to demonstrate the successful identification of the relevant features determined during the classification process. We build two identification methods into the established DenseNet architecture and evaluate them for DR diagnosis. We use a large dataset of over 88k retinal fundus images (http://www.eyepacs.com/), which are classified in terms of the presence and severity of DR, i.e. [no, mild, moderate, severe, proliferative]. We train on a random subset of 78,076 (88%) images, reserving 10,626 for testing, and extract feature maps for each test image, which identify the details that have contributed most strongly to the CNN prediction. Results: Using this approach, we have been able to determine, on a per-image basis, the regions identified within the CNN as being relevant. For example, for severe DR, haemorrhages, microaneurysms and cotton wool spots have been identified as contributing to the diagnosis, while laser spots, neovascularisation and venous reduplication have been automatically identified as corresponding to proliferative DR. Conclusions: This feature extraction has great potential for providing a method of identifying and visualising the features that have contributed to the automated classification, which is an important element in encouraging confidence in CNN-based approaches from users and clinicians. This work can also aid in the validation and further development of CNN methods, with the potential for allowing previously unidentified yet relevant features to be determined.

M3 - Meeting abstract

VL - 29

SP - NP15-NP16

JO - EUROPEAN JOURNAL OF OPHTHALMOLOGY

JF - EUROPEAN JOURNAL OF OPHTHALMOLOGY

SN - 1120-6721

IS - 3

UR - https://journals.sagepub.com/doi/full/10.1177/1120672119847084

ER -
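The abstract describes weighting the final convolutional feature maps to localise the image regions that drove a DR prediction, but does not name the two identification methods used. As a minimal sketch of one common technique in this family, a class activation map (CAM), the snippet below combines a stack of feature maps with a grade's classifier weights to produce a per-pixel relevance map. All shapes, names, and the random "activations" are illustrative assumptions, not the authors' actual DenseNet pipeline.

```python
import numpy as np

# Hypothetical DR severity grades, matching the labels listed in the abstract.
DR_GRADES = ["no", "mild", "moderate", "severe", "proliferative"]

def class_activation_map(feature_maps, class_weights, grade_index):
    """Collapse C feature maps of shape (C, H, W) into one (H, W) relevance
    map for a given grade, weighting each channel by that grade's classifier
    weight, then normalising the result to [0, 1]."""
    # Weighted sum over the channel axis: (C,) . (C, H, W) -> (H, W)
    cam = np.tensordot(class_weights[grade_index], feature_maps, axes=1)
    cam -= cam.min()          # shift so the least relevant pixel is 0
    if cam.max() > 0:
        cam /= cam.max()      # scale so the most relevant pixel is 1
    return cam

# Toy stand-ins for a trained network's last conv block and classifier head.
rng = np.random.default_rng(0)
features = rng.random((64, 16, 16))          # (channels, H, W) activations
weights = rng.random((len(DR_GRADES), 64))   # one weight vector per grade

cam = class_activation_map(features, weights, DR_GRADES.index("severe"))
print(cam.shape)
```

High values in `cam` mark the regions that contributed most to the "severe" score; upsampled to the fundus image's resolution, such a map can be overlaid to highlight lesions like haemorrhages or cotton wool spots.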