
Different classifiers find different defects although with different level of consistency

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Different classifiers find different defects although with different level of consistency. / Bowes, David; Hall, Tracy; Petrić, Jean.
PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering. New York: Association for Computing Machinery, Inc, 2015.


Harvard

Bowes, D, Hall, T & Petrić, J 2015, Different classifiers find different defects although with different level of consistency. in PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering. Association for Computing Machinery, Inc, New York, 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015, Beijing, China, 21/10/15. https://doi.org/10.1145/2810146.2810149

APA

Bowes, D., Hall, T., & Petrić, J. (2015). Different classifiers find different defects although with different level of consistency. In PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering. Association for Computing Machinery, Inc. https://doi.org/10.1145/2810146.2810149

Vancouver

Bowes D, Hall T, Petrić J. Different classifiers find different defects although with different level of consistency. In: PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering. New York: Association for Computing Machinery, Inc; 2015. doi: 10.1145/2810146.2810149

Author

Bowes, David ; Hall, Tracy ; Petrić, Jean. / Different classifiers find different defects although with different level of consistency. PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering. New York : Association for Computing Machinery, Inc, 2015.

Bibtex

@inproceedings{afac11d240f943dbb476af6929612def,
title = "Different classifiers find different defects although with different level of consistency",
abstract = "BACKGROUND - During the last 10 years hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar with models rarely performing above the predictive performance ceiling of about 80% recall. OBJECTIVE - We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. METHOD - We perform a sensitivity analysis to compare the performance of Random Forest, Na{\"i}ve Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty is compared across classifiers. RESULTS - Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. CONCLUSIONS - Our results confirm that a unique sub-set of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision making strategies not based on majority voting are likely to perform best.",
author = "David Bowes and Tracy Hall and Jean Petri{\'c}",
year = "2015",
month = oct,
day = "21",
doi = "10.1145/2810146.2810149",
language = "English",
booktitle = "PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering",
publisher = "Association for Computing Machinery, Inc",
note = "11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015 ; Conference date: 21-10-2015",

}
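
The abstract's core observation, that classifiers with similar headline recall can still flag different sets of defective modules, can be sketched as below. This is a minimal illustration only, not the authors' experimental setup: it uses scikit-learn analogues of the paper's four learners (with `DecisionTreeClassifier` standing in for R's RPart) on synthetic data rather than the NASA data sets, and all names and parameters are assumptions for the sketch.

```python
# Sketch: classifiers with comparable recall may flag different instances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier  # stand-in for R's RPart
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic stand-in for a defect data set: ~20% "defective" modules.
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

classifiers = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(random_state=0),
}

recalls = {}   # name -> recall on the test split
flagged = {}   # name -> set of test indices predicted defective
for name, clf in classifiers.items():
    preds = clf.fit(X_tr, y_tr).predict(X_te)
    recalls[name] = recall_score(y_te, preds)
    flagged[name] = set(np.flatnonzero(preds == 1))
    print(f"{name}: recall = {recalls[name]:.2f}")

# Set differences between two classifiers' flagged instances: non-empty
# differences mean similar recall hides disagreement on *which* modules
# are predicted defective.
rf_only = flagged["RandomForest"] - flagged["NaiveBayes"]
nb_only = flagged["NaiveBayes"] - flagged["RandomForest"]
print(f"flagged by RF only: {len(rf_only)}, by NB only: {len(nb_only)}")
```

Comparing the per-instance sets rather than only aggregate scores is what motivates the paper's conclusion about ensembles: if each learner finds a partly unique subset of defects, an ensemble that does not rely on majority voting can keep those minority-flagged defects.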

RIS

TY - GEN

T1 - Different classifiers find different defects although with different level of consistency

AU - Bowes, David

AU - Hall, Tracy

AU - Petrić, Jean

PY - 2015/10/21

Y1 - 2015/10/21

N2 - BACKGROUND - During the last 10 years hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar with models rarely performing above the predictive performance ceiling of about 80% recall. OBJECTIVE - We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. METHOD - We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty is compared across classifiers. RESULTS - Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. CONCLUSIONS - Our results confirm that a unique sub-set of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision making strategies not based on majority voting are likely to perform best.

AB - BACKGROUND - During the last 10 years hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar with models rarely performing above the predictive performance ceiling of about 80% recall. OBJECTIVE - We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. METHOD - We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty is compared across classifiers. RESULTS - Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. CONCLUSIONS - Our results confirm that a unique sub-set of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision making strategies not based on majority voting are likely to perform best.

U2 - 10.1145/2810146.2810149

DO - 10.1145/2810146.2810149

M3 - Conference contribution/Paper

AN - SCOPUS:84947607088

BT - PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering

PB - Association for Computing Machinery, Inc

CY - New York

T2 - 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015

Y2 - 21 October 2015

ER -