Software defect prediction using static code metrics underestimates defect-proneness

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Software defect prediction using static code metrics underestimates defect-proneness. / Gray, D.; Bowes, D.; Davey, N. et al.
The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. p. 1-7.

Harvard

Gray, D, Bowes, D, Davey, N, Sun, Y & Christianson, B 2010, Software defect prediction using static code metrics underestimates defect-proneness. in The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, pp. 1-7. https://doi.org/10.1109/IJCNN.2010.5596650

APA

Gray, D., Bowes, D., Davey, N., Sun, Y., & Christianson, B. (2010). Software defect prediction using static code metrics underestimates defect-proneness. In The 2010 International Joint Conference on Neural Networks (IJCNN) (pp. 1-7). IEEE. https://doi.org/10.1109/IJCNN.2010.5596650

Vancouver

Gray D, Bowes D, Davey N, Sun Y, Christianson B. Software defect prediction using static code metrics underestimates defect-proneness. In The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE. 2010. p. 1-7. doi: 10.1109/IJCNN.2010.5596650

Author

Gray, D.; Bowes, D.; Davey, N. et al. / Software defect prediction using static code metrics underestimates defect-proneness. The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. pp. 1-7

Bibtex

@inproceedings{4109214bedd340bdaa0f210b1d906dcb,
title = "Software defect prediction using static code metrics underestimates defect-proneness",
abstract = "Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real-world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more “confident” in the predictions they made that turned out to be correct.",
author = "D. Gray and D. Bowes and N. Davey and Y. Sun and B. Christianson",
year = "2010",
doi = "10.1109/IJCNN.2010.5596650",
language = "English",
isbn = "9781424469178",
pages = "1--7",
booktitle = "The 2010 International Joint Conference on Neural Networks (IJCNN)",
publisher = "IEEE",
}

RIS

TY - GEN

T1 - Software defect prediction using static code metrics underestimates defect-proneness

AU - Gray, D.

AU - Bowes, D.

AU - Davey, N.

AU - Sun, Y.

AU - Christianson, B.

PY - 2010

Y1 - 2010

N2 - Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real-world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more “confident” in the predictions they made that turned out to be correct.

AB - Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real-world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more “confident” in the predictions they made that turned out to be correct.

U2 - 10.1109/IJCNN.2010.5596650

DO - 10.1109/IJCNN.2010.5596650

M3 - Conference contribution/Paper

SN - 9781424469178

SP - 1

EP - 7

BT - The 2010 International Joint Conference on Neural Networks (IJCNN)

PB - IEEE

ER -
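
Illustration

The abstract describes the paper's core technique: training Support Vector Machine classifiers on static code metrics and observing that the classifiers were, on average, more "confident" in the predictions that turned out to be correct. The following is a minimal Python sketch of that idea using scikit-learn, with synthetic placeholder data rather than the NASA Metrics Data Program datasets used in the paper; the feature count, the RBF kernel, and the reading of the decision-function margin as "confidence" are illustrative assumptions, not the authors' exact experimental setup.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: 500 modules, 5 static code metrics (e.g. LOC,
# cyclomatic complexity). Label 1 = defective module, 0 = non-defective.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardise the metrics (SVMs are scale-sensitive), then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)

# Distance from the separating hyperplane, read here as a rough proxy for
# the classifier's "confidence" in each prediction.
margin = np.abs(clf.decision_function(X_test))

print("mean |margin|, correct predictions:  ", margin[pred == y_test].mean())
print("mean |margin|, incorrect predictions:", margin[pred != y_test].mean())

If the paper's finding holds, the first mean should exceed the second; with this synthetic data the numbers demonstrate only the mechanics, not the result.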