
Learning the features of diabetic retinopathy with convolutional neural networks

Research output: Contribution to Journal/Magazine › Meeting abstract › peer-review

Journal publication date: 16/05/2019
Journal: European Journal of Ophthalmology
Issue number: 3
Volume: 29
Number of pages: 2
Pages (from-to): NP15-NP16
Publication status: Published
Original language: English

Abstract

Design: A study evaluating machine learning feature extraction approaches for the diagnosis of diabetic retinopathy (DR) on an established public dataset of retinal images.

Purpose: Convolutional Neural Networks (CNNs) have been demonstrated to achieve state-of-the-art results on many complex computer vision tasks, including the automated diagnosis of diseases such as DR. CNNs are a powerful development of machine learning which not only produce excellent classification results based on image features but also determine the relevant features automatically. However, the current inability to show what these features are has led to CNN-based approaches being regarded as “black box” methods, which are difficult to accept. In this work, we demonstrate two methods for identifying the learned features, applying them to the diagnosis of DR from fundus images.

Methods: Our aim is to demonstrate the successful identification of the relevant features determined during the classification process. We build two identification methods into the established DenseNet architecture and evaluate them for DR diagnosis. We use a large dataset of over 88,000 retinal fundus images (http://www.eyepacs.com/), graded for the presence and severity of DR (no, mild, moderate, severe, proliferative). We train on a random subset of 78,076 images (88%), reserving 10,626 for testing, and extract feature maps for each test image that identify the details contributing most strongly to the CNN's prediction.
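
The abstract does not name the two identification methods or include code, so the following is only a minimal sketch of the kind of pipeline described, assuming a torchvision DenseNet-121 with a five-class grading head and a forward hook that captures the final convolutional feature maps for each test image; every identifier and hyperparameter here is an illustrative assumption, not the authors' implementation.

# Minimal sketch, not the authors' code: torchvision DenseNet-121 adapted
# for five-way DR grading, with a forward hook capturing the last conv maps.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # no, mild, moderate, severe, proliferative

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
model.eval()

feature_maps = {}  # filled by the hook on every forward pass

def save_maps(module, inputs, output):
    # DenseNet-121's feature extractor outputs (N, 1024, 7, 7) for 224x224 input
    feature_maps["last_conv"] = output.detach()

model.features.register_forward_hook(save_maps)

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed fundus image
logits = model(x)                 # DR grade prediction
maps = feature_maps["last_conv"]  # feature maps kept for visualisation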

Results: Using this approach, we have been able to determine, on a per-image basis, the regions the CNN identifies as relevant. For example, haemorrhages, microaneurysms and cotton wool spots were identified as contributing to a diagnosis of severe DR, while laser spots, neovascularisation and venous reduplication were automatically identified as corresponding to proliferative DR.
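
One plausible way to turn the hooked feature maps from the sketch above into such per-image evidence is a class-activation-map (CAM) style weighting of the final convolutional maps by the classifier weights of the predicted grade. The abstract does not confirm that this is the method used, so treat it purely as an illustration; it reuses the variables model, logits and maps from the previous sketch.

import torch
import torch.nn.functional as F

def class_activation_map(maps, fc_weight, class_idx, size=(224, 224)):
    # maps: (1, C, h, w) hooked feature maps; fc_weight: (num_classes, C)
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx].detach(), maps[0])
    cam = F.relu(cam)               # keep positive evidence only
    cam = cam / (cam.max() + 1e-8)  # normalise to [0, 1]
    return F.interpolate(cam[None, None], size=size,
                         mode="bilinear", align_corners=False)[0, 0]

pred = logits.argmax(dim=1).item()  # predicted DR grade
heatmap = class_activation_map(maps, model.classifier.weight, pred)
# Overlaying `heatmap` on the fundus image highlights regions, such as
# haemorrhages or neovascularisation, that drove the prediction.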

Conclusions: This feature extraction approach has great potential as a method of identifying and visualising the features that contributed to the automated classification, an important element in encouraging confidence in CNN-based approaches among users and clinicians. This work can also aid the validation and further development of CNN methods, with the potential to reveal previously unidentified yet relevant features.