Further thoughts on precision

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Further thoughts on precision. / Gray, D.; Bowes, D.; Davey, N. et al.
15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011). IEEE, 2011. p. 129-133.

Harvard

Gray, D, Bowes, D, Davey, N, Sun, Y & Christianson, B 2011, Further thoughts on precision. in 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011). IEEE, pp. 129-133. https://doi.org/10.1049/ic.2011.0016

APA

Gray, D., Bowes, D., Davey, N., Sun, Y., & Christianson, B. (2011). Further thoughts on precision. In 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011) (pp. 129-133). IEEE. https://doi.org/10.1049/ic.2011.0016

Vancouver

Gray D, Bowes D, Davey N, Sun Y, Christianson B. Further thoughts on precision. In 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011). IEEE. 2011. p. 129-133. doi: 10.1049/ic.2011.0016

Author

Gray, D. ; Bowes, D. ; Davey, N. et al. / Further thoughts on precision. 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011). IEEE, 2011. pp. 129-133

BibTeX

@inproceedings{b58b1e78942445b5a11399af714a4784,
title = "Further thoughts on precision",
abstract = "Background: There has been much discussion amongst automated software defect prediction researchers regarding the use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well-documented examples of how class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.",
author = "D. Gray and D. Bowes and N. Davey and Y. Sun and B. Christianson",
year = "2011",
doi = "10.1049/ic.2011.0016",
language = "English",
isbn = "9781849195096",
pages = "129--133",
booktitle = "15th Annual Conference on Evaluation \& Assessment in Software Engineering (EASE 2011)",
publisher = "IEEE",

}

RIS

TY - GEN

T1 - Further thoughts on precision

AU - Gray, D.

AU - Bowes, D.

AU - Davey, N.

AU - Sun, Y.

AU - Christianson, B.

PY - 2011

Y1 - 2011

N2 - Background: There has been much discussion amongst automated software defect prediction researchers regarding the use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well-documented examples of how class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.

AB - Background: There has been much discussion amongst automated software defect prediction researchers regarding the use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well-documented examples of how class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.

U2 - 10.1049/ic.2011.0016

DO - 10.1049/ic.2011.0016

M3 - Conference contribution/Paper

SN - 9781849195096

SP - 129

EP - 133

BT - 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011)

PB - IEEE

ER -
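
As a rough illustration of the abstract's point, consider the worked example below (a minimal sketch; the numbers are invented for illustration and do not come from the paper). On a heavily imbalanced test set, a classifier can report a low false positive rate while most of its positive predictions are wrong, a gap that only precision exposes.

# Hypothetical worked example, not from the paper: a defect predictor
# evaluated on an imbalanced test set where defective modules are the
# minority class.

def precision(tp, fp):
    # Proportion of modules predicted defective that really are: TP / (TP + FP).
    return tp / (tp + fp)

def false_positive_rate(fp, tn):
    # Proportion of clean modules wrongly flagged: FP / (FP + TN).
    return fp / (fp + tn)

# 10,000 modules, 5% defective -- within the abstract's 5-10 percent danger zone.
defective, clean = 500, 9500

tp = int(0.70 * defective)  # the classifier catches 70% of defects -> 350
fp = int(0.05 * clean)      # and wrongly flags 5% of clean modules -> 475
tn = clean - fp             # remaining clean modules -> 9025

print(f"false positive rate = {false_positive_rate(fp, tn):.3f}")  # 0.050
print(f"precision           = {precision(tp, fp):.3f}")            # 0.424

Despite a false positive rate of only 5 percent, fewer than half of the flagged modules are actually defective: exactly the overly optimistic picture the abstract warns of when precision goes unreported on such data.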