
Comprehensive Evaluations of Student Performance Estimation via Machine Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Comprehensive Evaluations of Student Performance Estimation via Machine Learning. / Mohammad, Ahmad Saeed; Al-Kaltakchi, Musab T. S.; Alshehabi Al-Ani, Jabir et al.
In: Mathematics, Vol. 11, No. 14, 3153, 18.07.2023.

Harvard

Mohammad, AS, Al-Kaltakchi, MTS, Alshehabi Al-Ani, J, Chambers, JA & Kang, Z (ed.) 2023, 'Comprehensive Evaluations of Student Performance Estimation via Machine Learning', Mathematics, vol. 11, no. 14, 3153. https://doi.org/10.3390/math11143153

APA

Mohammad, A. S., Al-Kaltakchi, M. T. S., Alshehabi Al-Ani, J., Chambers, J. A., & Kang, Z. (Ed.) (2023). Comprehensive Evaluations of Student Performance Estimation via Machine Learning. Mathematics, 11(14), Article 3153. https://doi.org/10.3390/math11143153

Vancouver

Mohammad AS, Al-Kaltakchi MTS, Alshehabi Al-Ani J, Chambers JA, Kang Z, (ed.). Comprehensive Evaluations of Student Performance Estimation via Machine Learning. Mathematics. 2023 Jul 18;11(14):3153. doi: 10.3390/math11143153

Author

Mohammad, Ahmad Saeed ; Al-Kaltakchi, Musab T. S. ; Alshehabi Al-Ani, Jabir et al. / Comprehensive Evaluations of Student Performance Estimation via Machine Learning. In: Mathematics. 2023 ; Vol. 11, No. 14.

Bibtex

@article{a646467f1efe45d19494c76f5fb04a14,
title = "Comprehensive Evaluations of Student Performance Estimation via Machine Learning",
abstract = "Success in student learning is the primary aim of the educational system. Artificial intelligence utilizes data and machine learning to achieve excellence in student learning. In this paper, we exploit several machine learning techniques to estimate early student performance. Two main simulations are used for the evaluation. The first simulation used the Traditional Machine Learning Classifiers (TMLCs) applied to the House dataset, and they are Gaussian Na{\"i}ve Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Multi-Layer Perceptron (MLP), Random Forest (RF), Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA). The best results were achieved with the MLP classifier with a division of 80% training and 20% testing, with an accuracy of 88.89%. The fusion of these seven classifiers was also applied and the highest result was equal to the MLP. Moreover, in the second simulation, the Convolutional Neural Network (CNN) was utilized and evaluated on five main datasets, namely, House, Western Ontario University (WOU), Experience Application Programming Interface (XAPI), University of California-Irvine (UCI), and Analytics Vidhya (AV). The UCI dataset was subdivided into three datasets, namely, UCI-Math, UCI-Por, and UCI-Fused. Moreover, the AV dataset has three targets which are Math, Reading, and Writing. The best accuracy results were achieved at 97.5%, 99.55%, 98.57%, 99.28%, 99.40%, 99.67%, 92.93%, 96.99%, and 96.84% for the House, WOU, XAPI, UCI-Math, UCI-Por, UCI-Fused, AV-Math, AV-Reading, and AV-Writing datasets, respectively, under the same protocol of evaluation. The system demonstrates that the proposed CNN-based method surpasses all seven conventional methods and other state-of-the-art-work.",
keywords = "support vector machine, multi-layer perceptron, quadratic discriminant analysis, linear discriminant analysis, convolutional neural network, machine learning, random forest, 68T07, decision tree, student performance, Gaussian Na{\"i}ve Bayes",
author = "Mohammad, {Ahmad Saeed} and Al-Kaltakchi, {Musab T. S.} and {Alshehabi Al-Ani}, Jabir and Chambers, {Jonathon A.} and Zhao Kang",
year = "2023",
month = jul,
day = "18",
doi = "10.3390/math11143153",
language = "English",
volume = "11",
journal = "Mathematics",
issn = "2227-7390",
publisher = "MDPI AG",
number = "14",
pages = "3153",
}
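
The first simulation described in the abstract (seven traditional classifiers on an 80%/20% train/test split, plus a fusion of all seven) can be sketched with scikit-learn. This is a minimal illustration only: the House dataset is not distributed with this record, so a synthetic dataset stands in, and every hyperparameter below is an assumption of this sketch, not a setting reported in the paper.

```python
# Illustrative sketch of the abstract's first experiment: seven TMLCs
# (GNB, SVM, DT, MLP, RF, LDA, QDA) on an 80/20 split, plus majority-vote
# fusion. Synthetic data stands in for the (non-public) House dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

# Stand-in data; the paper's actual features and labels differ.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# 80% training / 20% testing, the split the abstract reports as best
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "GNB": GaussianNB(),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))

# Fusion of the seven classifiers by hard (majority) voting; the paper
# does not specify its fusion rule, so majority voting is an assumption.
fusion = VotingClassifier(list(classifiers.items()), voting="hard")
print("fusion", fusion.fit(X_tr, y_tr).score(X_te, y_te))
```

On real data, the comparison would use the paper's evaluation protocol; the sketch only shows the structure of the experiment, not its reported accuracies.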

RIS

TY - JOUR

T1 - Comprehensive Evaluations of Student Performance Estimation via Machine Learning

AU - Mohammad, Ahmad Saeed

AU - Al-Kaltakchi, Musab T. S.

AU - Alshehabi Al-Ani, Jabir

AU - Chambers, Jonathon A.

A2 - Kang, Zhao

PY - 2023/7/18

Y1 - 2023/7/18

N2 - Success in student learning is the primary aim of the educational system. Artificial intelligence utilizes data and machine learning to achieve excellence in student learning. In this paper, we exploit several machine learning techniques to estimate early student performance. Two main simulations are used for the evaluation. The first simulation used the Traditional Machine Learning Classifiers (TMLCs) applied to the House dataset, and they are Gaussian Naïve Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Multi-Layer Perceptron (MLP), Random Forest (RF), Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA). The best results were achieved with the MLP classifier with a division of 80% training and 20% testing, with an accuracy of 88.89%. The fusion of these seven classifiers was also applied and the highest result was equal to the MLP. Moreover, in the second simulation, the Convolutional Neural Network (CNN) was utilized and evaluated on five main datasets, namely, House, Western Ontario University (WOU), Experience Application Programming Interface (XAPI), University of California-Irvine (UCI), and Analytics Vidhya (AV). The UCI dataset was subdivided into three datasets, namely, UCI-Math, UCI-Por, and UCI-Fused. Moreover, the AV dataset has three targets which are Math, Reading, and Writing. The best accuracy results were achieved at 97.5%, 99.55%, 98.57%, 99.28%, 99.40%, 99.67%, 92.93%, 96.99%, and 96.84% for the House, WOU, XAPI, UCI-Math, UCI-Por, UCI-Fused, AV-Math, AV-Reading, and AV-Writing datasets, respectively, under the same protocol of evaluation. The system demonstrates that the proposed CNN-based method surpasses all seven conventional methods and other state-of-the-art-work.

AB - Success in student learning is the primary aim of the educational system. Artificial intelligence utilizes data and machine learning to achieve excellence in student learning. In this paper, we exploit several machine learning techniques to estimate early student performance. Two main simulations are used for the evaluation. The first simulation used the Traditional Machine Learning Classifiers (TMLCs) applied to the House dataset, and they are Gaussian Naïve Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Multi-Layer Perceptron (MLP), Random Forest (RF), Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA). The best results were achieved with the MLP classifier with a division of 80% training and 20% testing, with an accuracy of 88.89%. The fusion of these seven classifiers was also applied and the highest result was equal to the MLP. Moreover, in the second simulation, the Convolutional Neural Network (CNN) was utilized and evaluated on five main datasets, namely, House, Western Ontario University (WOU), Experience Application Programming Interface (XAPI), University of California-Irvine (UCI), and Analytics Vidhya (AV). The UCI dataset was subdivided into three datasets, namely, UCI-Math, UCI-Por, and UCI-Fused. Moreover, the AV dataset has three targets which are Math, Reading, and Writing. The best accuracy results were achieved at 97.5%, 99.55%, 98.57%, 99.28%, 99.40%, 99.67%, 92.93%, 96.99%, and 96.84% for the House, WOU, XAPI, UCI-Math, UCI-Por, UCI-Fused, AV-Math, AV-Reading, and AV-Writing datasets, respectively, under the same protocol of evaluation. The system demonstrates that the proposed CNN-based method surpasses all seven conventional methods and other state-of-the-art-work.

KW - support vector machine

KW - multi-layer perceptron

KW - quadratic discriminant analysis

KW - linear discriminant analysis

KW - convolutional neural network

KW - machine learning

KW - random forest

KW - 68T07

KW - decision tree

KW - student performance

KW - Gaussian Naïve Bayes

U2 - 10.3390/math11143153

DO - 10.3390/math11143153

M3 - Journal article

VL - 11

JO - Mathematics

JF - Mathematics

SN - 2227-7390

IS - 14

M1 - 3153

ER -