Electronic data

  • sensitivity_sqj

    Rights statement: The final publication is available at Springer via http://dx.doi.org/10.1007/s11219-016-9353-3

    Accepted author manuscript, 882 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: http://dx.doi.org/10.1007/s11219-016-9353-3

Software defect prediction: do different classifiers find the same defects?

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Software defect prediction: do different classifiers find the same defects? / Bowes, David; Hall, Tracy; Petrić, Jean.
In: Software Quality Journal, Vol. 26, No. 2, 01.06.2018, p. 525-552.

Vancouver

Bowes D, Hall T, Petrić J. Software defect prediction: do different classifiers find the same defects? Software Quality Journal. 2018 Jun 1;26(2):525-552. Epub 2017 Feb 7. doi: 10.1007/s11219-016-9353-3

BibTeX

@article{7a3001fd35924e11953d90159f395d6a,
  title     = "Software defect prediction: do different classifiers find the same defects?",
  abstract  = "During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Na{\"i}ve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.",
  keywords  = "Machine learning, Prediction modelling, Software defect prediction",
  author    = "David Bowes and Tracy Hall and Jean Petri{\'c}",
  note      = "The final publication is available at Springer via http://dx.doi.org/10.1007/s11219-016-9353-3",
  year      = "2018",
  month     = jun,
  day       = "1",
  doi       = "10.1007/s11219-016-9353-3",
  language  = "English",
  volume    = "26",
  pages     = "525--552",
  journal   = "Software Quality Journal",
  issn      = "0963-9314",
  publisher = "Springer New York",
  number    = "2",
}

RIS

TY - JOUR

T1 - Software defect prediction

T2 - do different classifiers find the same defects?

AU - Bowes, David

AU - Hall, Tracy

AU - Petrić, Jean

N1 - The final publication is available at Springer via http://dx.doi.org/10.1007/s11219-016-9353-3

PY - 2018/6/1

Y1 - 2018/6/1

N2 - During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.

AB - During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.

KW - Machine learning

KW - Prediction modelling

KW - Software defect prediction

U2 - 10.1007/s11219-016-9353-3

DO - 10.1007/s11219-016-9353-3

M3 - Journal article

AN - SCOPUS:85011708666

VL - 26

SP - 525

EP - 552

JO - Software Quality Journal

JF - Software Quality Journal

SN - 0963-9314

IS - 2

ER -
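
To make the comparison described in the abstract concrete, here is a minimal sketch (not the authors' code or data) of how one might check whether classifiers with similar recall nevertheless find different individual defects. It uses scikit-learn stand-ins for the four classifier families named in the paper, with DecisionTreeClassifier standing in for R's RPart (CART); the synthetic dataset and all parameter choices are illustrative assumptions, not the NASA, open source or commercial datasets or tuning used in the study.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a defect dataset: rows are modules, label 1 = defective (minority class).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "RPart (CART stand-in)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
}

found = {}  # per classifier: indices of truly defective test modules it flags (true positives)
for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()  # capture each prediction outcome
    found[name] = set(np.where((y_test == 1) & (y_pred == 1))[0])
    print(f"{name}: recall={recall_score(y_test, y_pred):.2f} (TP={tp}, FP={fp}, FN={fn}, TN={tn})")

# Overlap analysis: similar recall values can still hide different sets of detected defects.
print("Defects found by all four classifiers:", len(set.intersection(*found.values())))
for name, s in found.items():
    others = set.union(*(o for n, o in found.items() if n != name))
    print(f"Defects found only by {name}:", len(s - others))

In the spirit of the paper's conclusion, the unique per-classifier defect sets surfaced by such a comparison are what an ensemble with a non-majority-voting decision strategy would aim to exploit, for example by accepting a module flagged by any one sufficiently reliable classifier rather than requiring agreement among them.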