
Electronic data

  • AEJ- Accepted Manuscript

    Accepted author manuscript, 1.14 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


A deep learning architecture for multi-class lung diseases classification using chest X-ray (CXR) images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Journal publication date: 1/02/2023
Journal: Alexandria Engineering Journal
Volume: 64
Number of pages: 13
Pages (from-to): 923-935
Publication status: Published
Original language: English

Abstract

In 2019, the world experienced the rapid outbreak of the COVID-19 pandemic, which created an alarming situation worldwide. The virus targets the respiratory system, causing pneumonia along with other symptoms such as fatigue, dry cough, and fever, and can be mistakenly diagnosed as pneumonia, lung cancer, or tuberculosis (TB). Early diagnosis of COVID-19 is therefore critical, since the disease can lead to patient mortality. Chest X-ray (CXR) imaging is commonly employed in the healthcare sector because it supports both quick and precise diagnosis. Deep learning algorithms have demonstrated remarkable capabilities in lung disease detection and classification; they facilitate and expedite the diagnosis process and save time for medical practitioners. In this paper, a deep learning (DL) architecture for multi-class classification of pneumonia, lung cancer, TB, lung opacity, and most recently COVID-19 is proposed. A large set of CXR images, comprising 3615 COVID-19, 6012 lung opacity, 5870 pneumonia, 20,000 lung cancer, 1400 tuberculosis, and 10,192 normal images, was resized, normalized, and randomly split to fit the DL requirements. For classification, we utilized a pre-trained VGG19 model followed by three convolutional neural network (CNN) blocks for feature extraction, and a fully connected network at the classification stage. The experimental results revealed that our proposed VGG19 + CNN outperformed other existing work, achieving 96.48% accuracy, 93.75% recall, 97.56% precision, 95.62% F1 score, and 99.82% area under the curve (AUC). The proposed model delivered superior performance, allowing healthcare practitioners to diagnose and treat patients more quickly and efficiently.
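
The sketch below illustrates, in TensorFlow/Keras, the kind of VGG19 + CNN architecture the abstract describes: a pre-trained VGG19 backbone, three additional convolutional blocks for feature extraction, and a fully connected classification stage over the six classes. The input resolution, filter counts, dense-layer width, dropout rate, frozen backbone, and ImageNet weights are all assumptions for illustration; the abstract does not specify these details, so this is a minimal sketch rather than the authors' exact implementation.

```python
# Minimal sketch of a VGG19 + CNN classifier for six CXR classes.
# Layer sizes, input shape, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # COVID-19, lung opacity, pneumonia, lung cancer, TB, normal
INPUT_SHAPE = (224, 224, 3)  # assumed resize target for the CXR images


def build_vgg19_cnn(input_shape=INPUT_SHAPE, num_classes=NUM_CLASSES):
    # Pre-trained VGG19 backbone without its original classifier head
    backbone = tf.keras.applications.VGG19(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    backbone.trainable = False  # assumption: backbone kept frozen

    x = backbone.output
    # Three additional convolutional blocks for feature extraction
    for filters in (512, 256, 128):  # assumed filter counts
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=2, padding="same")(x)

    # Fully connected network at the classification stage
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs=backbone.input, outputs=outputs)
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
    )
    return model


model = build_vgg19_cnn()
model.summary()
```

In this sketch the resized, normalized images would be fed as one-hot-labelled batches (e.g. via `model.fit` on a randomly split training set), matching the preprocessing steps mentioned in the abstract.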