

Code cleaning for software defect prediction: A cautionary tale

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Code cleaning for software defect prediction: A cautionary tale. / Shippey, T.; Bowes, D.; Counsell, S. et al.
2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2018. p. 239-243.


Harvard

Shippey, T, Bowes, D, Counsell, S & Hall, T 2018, Code cleaning for software defect prediction: A cautionary tale. in 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, pp. 239-243. https://doi.org/10.1109/SEAA.2018.00047

APA

Shippey, T., Bowes, D., Counsell, S., & Hall, T. (2018). Code cleaning for software defect prediction: A cautionary tale. In 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) (pp. 239-243). IEEE. https://doi.org/10.1109/SEAA.2018.00047

Vancouver

Shippey T, Bowes D, Counsell S, Hall T. Code cleaning for software defect prediction: A cautionary tale. In 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE. 2018. p. 239-243. doi: 10.1109/SEAA.2018.00047

Author

Shippey, T.; Bowes, D.; Counsell, S. et al. / Code cleaning for software defect prediction: A cautionary tale. 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2018. pp. 239-243

Bibtex

@inproceedings{aa9f82da14354992bef783dcf9e87fef,
title = "Code cleaning for software defect prediction: A cautionary tale",
abstract = "In this paper, we describe our experience of developing a new technique to improve defect prediction (code cleaning), which performed very encouragingly on the first two systems on which we evaluated it (both systems had their origins in one company). Code cleaning also worked well on an additional open source system (Eclipse). But our code cleaning technique then performed disappointingly on all 69 subsequent open source systems on which we evaluated it. Without our round-two evaluations on these 69 open source systems, we would have published misleading prediction results. We discuss the need for performance evaluations to be performed on carefully selected samples of systems if reliable conclusions are to be drawn.",
author = "T. Shippey and D. Bowes and S. Counsell and T. Hall",
year = "2018",
month = aug,
day = "29",
doi = "10.1109/SEAA.2018.00047",
language = "English",
pages = "239--243",
booktitle = "2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)",
publisher = "IEEE",
}

RIS

TY - GEN

T1 - Code cleaning for software defect prediction

T2 - A cautionary tale

AU - Shippey, T.

AU - Bowes, D.

AU - Counsell, S.

AU - Hall, T.

PY - 2018/8/29

Y1 - 2018/8/29

N2 - In this paper, we describe our experience of developing a new technique to improve defect prediction (code cleaning), which performed very encouragingly on the first two systems on which we evaluated it (both systems had their origins in one company). Code cleaning also worked well on an additional open source system (Eclipse). But our code cleaning technique then performed disappointingly on all 69 subsequent open source systems on which we evaluated it. Without our round-two evaluations on these 69 open source systems, we would have published misleading prediction results. We discuss the need for performance evaluations to be performed on carefully selected samples of systems if reliable conclusions are to be drawn.

AB - In this paper, we describe our experience of developing a new technique to improve defect prediction (code cleaning), which performed very encouragingly on the first two systems on which we evaluated it (both systems had their origins in one company). Code cleaning also worked well on an additional open source system (Eclipse). But our code cleaning technique then performed disappointingly on all 69 subsequent open source systems on which we evaluated it. Without our round-two evaluations on these 69 open source systems, we would have published misleading prediction results. We discuss the need for performance evaluations to be performed on carefully selected samples of systems if reliable conclusions are to be drawn.

U2 - 10.1109/SEAA.2018.00047

DO - 10.1109/SEAA.2018.00047

M3 - Conference contribution/Paper

SP - 239

EP - 243

BT - 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)

PB - IEEE

ER -