
Different classifiers find different defects although with different level of consistency

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 21/10/2015
Host publication: PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering
Place of publication: New York
Publisher: Association for Computing Machinery, Inc.
Number of pages: 10
ISBN (electronic): 9781450337151
Original language: English
Event: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015 - Beijing, China
Duration: 21/10/2015 → …

Conference

Conference: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015
Country/Territory: China
City: Beijing
Period: 21/10/15 → …

Abstract

BACKGROUND - During the last 10 years, hundreds of defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall.

OBJECTIVE - We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers.

METHOD - We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty is compared across classifiers.

RESULTS - Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others.

CONCLUSIONS - Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision-making strategies not based on majority voting are likely to perform best.
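
The comparison described in the METHOD section can be illustrated with a small sketch. The snippet below is not the authors' code: it uses scikit-learn in Python (the original study names RPart, which suggests an R-based setup), a synthetic imbalanced dataset as a stand-in for the 12 NASA data sets, and DecisionTreeClassifier as an assumed analogue of RPart. It shows the core idea of capturing each classifier's predictions in a confusion matrix and then comparing which individual defects each classifier actually finds.

```python
# Illustrative sketch only (not the paper's experimental setup): train four
# classifiers, record each one's confusion matrix, and compare the sets of
# true defects that each classifier detects.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic, imbalanced data as a placeholder for a NASA defect data set.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Decision Tree (RPart analogue)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(random_state=0),
}

defect_idx = np.where(y_test == 1)[0]   # indices of the true defects
found = {}                              # defects detected by each classifier
for name, clf in classifiers.items():
    pred = clf.fit(X_train, y_train).predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"{name}: tn={tn} fp={fp} fn={fn} tp={tp}")
    found[name] = set(defect_idx[pred[defect_idx] == 1])  # true positives

# Pairwise overlap of detected defects: if each classifier finds a distinct
# subset, an ensemble that does not rely on majority voting may recover
# defects that any single classifier misses.
names = list(found)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} & {b}: {len(found[a] & found[b])} defects found by both")
```

In this sketch, the per-defect sets (rather than aggregate recall alone) are what reveal whether classifiers with similar headline performance are finding the same or different defects, which is the question the abstract raises.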