Blended multi-modal deep convnet features for diabetic retinopathy severity prediction

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. / Bodapati, Jyostna Devi; Veeranjaneyulu, N.; Shareef, Shaik Nagur et al.
In: Electronics (Switzerland), Vol. 9, No. 6, 914, 06.2020.


Harvard

Bodapati, JD, Veeranjaneyulu, N, Shareef, SN, Hakak, S, Bilal, M, Maddikunta, PKR & Jo, O 2020, 'Blended multi-modal deep convnet features for diabetic retinopathy severity prediction', Electronics (Switzerland), vol. 9, no. 6, 914. https://doi.org/10.3390/electronics9060914

APA

Bodapati, J. D., Veeranjaneyulu, N., Shareef, S. N., Hakak, S., Bilal, M., Maddikunta, P. K. R., & Jo, O. (2020). Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. Electronics (Switzerland), 9(6), Article 914. https://doi.org/10.3390/electronics9060914

Vancouver

Bodapati JD, Veeranjaneyulu N, Shareef SN, Hakak S, Bilal M, Maddikunta PKR et al. Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. Electronics (Switzerland). 2020 Jun;9(6):914. doi: 10.3390/electronics9060914

Author

Bodapati, Jyostna Devi ; Veeranjaneyulu, N. ; Shareef, Shaik Nagur et al. / Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. In: Electronics (Switzerland). 2020 ; Vol. 9, No. 6.

Bibtex

@article{58b0cceadcd54ecd8534429699ff68a4,
title = "Blended multi-modal deep convnet features for diabetic retinopathy severity prediction",
abstract = "Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that further helps to improve the performance of DR recognition models. To extract the optimal representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than using features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross average pooling-based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.",
keywords = "1D pooling, Cross pooling, Diabetic retinopathy (DR), Uni-modal deep features, Pre-trained deep ConvNet, Multi-modal deep features, Transfer learning",
author = "Bodapati, {Jyostna Devi} and N. Veeranjaneyulu and Shareef, {Shaik Nagur} and Saqib Hakak and Muhammad Bilal and Maddikunta, {Praveen Kumar Reddy} and Ohyun Jo",
year = "2020",
month = jun,
doi = "10.3390/electronics9060914",
language = "English",
volume = "9",
journal = "Electronics (Switzerland)",
issn = "2079-9292",
publisher = "MDPI AG",
number = "6",

}

RIS

TY - JOUR

T1 - Blended multi-modal deep convnet features for diabetic retinopathy severity prediction

AU - Bodapati, Jyostna Devi

AU - Veeranjaneyulu, N.

AU - Shareef, Shaik Nagur

AU - Hakak, Saqib

AU - Bilal, Muhammad

AU - Maddikunta, Praveen Kumar Reddy

AU - Jo, Ohyun

PY - 2020/6

Y1 - 2020/6

N2 - Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that further helps to improve the performance of DR recognition models. To extract the optimal representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than using features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross average pooling-based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.

AB - Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that further helps to improve the performance of DR recognition models. To extract the optimal representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than using features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross average pooling-based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.

KW - 1D pooling

KW - Cross pooling

KW - Diabetic retinopathy (DR)

KW - Uni-modal deep features

KW - Pre-trained deep ConvNet

KW - Multi-modal deep features

KW - Transfer learning

UR - http://www.scopus.com/inward/record.url?scp=85086046027&partnerID=8YFLogxK

U2 - 10.3390/electronics9060914

DO - 10.3390/electronics9060914

M3 - Journal article

AN - SCOPUS:85086046027

VL - 9

JO - Electronics (Switzerland)

JF - Electronics (Switzerland)

SN - 2079-9292

IS - 6

M1 - 914

ER -
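The abstract describes blending features from multiple pre-trained ConvNets via 1D pooling and cross average pooling before training a DNN classifier. The paper's exact fusion module is not reproduced here; the snippet below is only a minimal NumPy sketch of the two pooling ideas, assuming the Xception and VGG16 backbones have each been reduced to a single global feature vector (2048-d and 512-d are typical output sizes, not values taken from the paper). The helper names `pool_1d` and `cross_average_pool` are illustrative, not from the authors' code.

```python
import numpy as np

def pool_1d(features: np.ndarray, window: int) -> np.ndarray:
    # Non-overlapping 1D average pooling over a feature vector:
    # reduces dimensionality by a factor of `window`.
    return features.reshape(-1, window).mean(axis=1)

def cross_average_pool(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    # Cross average pooling: blend two equal-length feature vectors
    # by element-wise averaging across the two modalities.
    return np.stack([feat_a, feat_b]).mean(axis=0)

rng = np.random.default_rng(0)
xception_feat = rng.standard_normal(2048)  # stand-in for a global-pooled Xception output
vgg16_feat = rng.standard_normal(512)      # stand-in for a global-pooled VGG16 output

# 1D-pool the larger vector down to the smaller vector's length,
# then cross-pool the two into one blended representation.
blended = cross_average_pool(pool_1d(xception_feat, 4), vgg16_feat)
print(blended.shape)  # (512,)
```

In the paper's pipeline, a vector like `blended` would then be fed to a DNN (with dropout at the input layer) for DR identification or severity prediction.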