
Evaluating guidelines for reporting empirical software engineering studies

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Evaluating guidelines for reporting empirical software engineering studies. / Kitchenham, Barbara A.; Al-Kilidar, Hiyam; Babar, Muhammad Ali et al.
In: Empirical Software Engineering, Vol. 13, No. 1, 02.2008, p. 97-121.


Harvard

Kitchenham, BA, Al-Kilidar, H, Babar, MA, Berry, M, Cox, K, Keung, J, Kurniawati, F, Staples, M, Zhang, H & Zhu, L 2008, 'Evaluating guidelines for reporting empirical software engineering studies', Empirical Software Engineering, vol. 13, no. 1, pp. 97-121. https://doi.org/10.1007/s10664-007-9053-5

APA

Kitchenham, B. A., Al-Kilidar, H., Babar, M. A., Berry, M., Cox, K., Keung, J., Kurniawati, F., Staples, M., Zhang, H., & Zhu, L. (2008). Evaluating guidelines for reporting empirical software engineering studies. Empirical Software Engineering, 13(1), 97-121. https://doi.org/10.1007/s10664-007-9053-5

Vancouver

Kitchenham BA, Al-Kilidar H, Babar MA, Berry M, Cox K, Keung J et al. Evaluating guidelines for reporting empirical software engineering studies. Empirical Software Engineering. 2008 Feb;13(1):97-121. doi: 10.1007/s10664-007-9053-5

Author

Kitchenham, Barbara A.; Al-Kilidar, Hiyam; Babar, Muhammad Ali et al. / Evaluating guidelines for reporting empirical software engineering studies. In: Empirical Software Engineering. 2008; Vol. 13, No. 1, pp. 97-121.

Bibtex

@article{537360f3da3949c9a2bab6754b59aabb,
title = "Evaluating guidelines for reporting empirical software engineering studies",
abstract = "BackgroundSeveral researchers have criticized the standards of performing and reporting empirical studies in software engineering. In order to address this problem, Jedlitschka and Pfahl have produced reporting guidelines for controlled experiments in software engineering. They pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted.AimThe aim of this paper is to present the method we used to evaluate the guidelines and report the results of our evaluation exercise. We suggest our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed.MethodWe used a reading method inspired by perspective-based and checklist-based reviews to perform a theoretical evaluation of the guidelines. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the reviews were based on a set of questions derived by brainstorming. A separate review was performed for each perspective. The review using the Author perspective considered each section of the guidelines sequentially.ResultsThe reviews detected 44 issues where the guidelines would benefit from amendment or clarification and 8 defects.ConclusionsReporting guidelines need to specify what information goes into what section and avoid excessive duplication. The current guidelines need to be revised and then subjected to further theoretical and empirical validation. Perspective-based checklists are a useful validation method but the practitioner/consultant perspective presents difficulties.",
keywords = "Controlled experiments , Software engineering , Guidelines , Perspective-based reading , Checklist-based reviews",
author = "Kitchenham, {Barbara A.} and Hiyam Al-Kilidar and Babar, {Muhammad Ali} and Mike Berry and Karl Cox and Jacky Keung and Felicia Kurniawati and Mark Staples and He Zhang and Liming Zhu",
year = "2008",
month = feb,
doi = "10.1007/s10664-007-9053-5",
language = "English",
volume = "13",
pages = "97--121",
journal = "Empirical Software Engineering",
issn = "1382-3256",
publisher = "Springer Netherlands",
number = "1",

}
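
For convenience, a minimal LaTeX sketch of how the BibTeX entry above might be cited; the file name references.bib is an assumption, and the citation key is taken verbatim from the entry:

% minimal sketch: save the BibTeX entry above as references.bib
\documentclass{article}
\begin{document}
Kitchenham et al.\ evaluated reporting guidelines for empirical
software engineering studies~\cite{537360f3da3949c9a2bab6754b59aabb}.
\bibliographystyle{plain}
\bibliography{references}  % compile with latex, then bibtex, then latex twice
\end{document}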

RIS

TY - JOUR

T1 - Evaluating guidelines for reporting empirical software engineering studies

AU - Kitchenham, Barbara A.

AU - Al-Kilidar, Hiyam

AU - Babar, Muhammad Ali

AU - Berry, Mike

AU - Cox, Karl

AU - Keung, Jacky

AU - Kurniawati, Felicia

AU - Staples, Mark

AU - Zhang, He

AU - Zhu, Liming

PY - 2008/2

Y1 - 2008/2

N2 - Background: Several researchers have criticized the standards of performing and reporting empirical studies in software engineering. In order to address this problem, Jedlitschka and Pfahl have produced reporting guidelines for controlled experiments in software engineering. They pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted. Aim: The aim of this paper is to present the method we used to evaluate the guidelines and report the results of our evaluation exercise. We suggest our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed. Method: We used a reading method inspired by perspective-based and checklist-based reviews to perform a theoretical evaluation of the guidelines. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the reviews were based on a set of questions derived by brainstorming. A separate review was performed for each perspective. The review using the Author perspective considered each section of the guidelines sequentially. Results: The reviews detected 44 issues where the guidelines would benefit from amendment or clarification and 8 defects. Conclusions: Reporting guidelines need to specify what information goes into what section and avoid excessive duplication. The current guidelines need to be revised and then subjected to further theoretical and empirical validation. Perspective-based checklists are a useful validation method but the practitioner/consultant perspective presents difficulties.

AB - Background: Several researchers have criticized the standards of performing and reporting empirical studies in software engineering. In order to address this problem, Jedlitschka and Pfahl have produced reporting guidelines for controlled experiments in software engineering. They pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted. Aim: The aim of this paper is to present the method we used to evaluate the guidelines and report the results of our evaluation exercise. We suggest our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed. Method: We used a reading method inspired by perspective-based and checklist-based reviews to perform a theoretical evaluation of the guidelines. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the reviews were based on a set of questions derived by brainstorming. A separate review was performed for each perspective. The review using the Author perspective considered each section of the guidelines sequentially. Results: The reviews detected 44 issues where the guidelines would benefit from amendment or clarification and 8 defects. Conclusions: Reporting guidelines need to specify what information goes into what section and avoid excessive duplication. The current guidelines need to be revised and then subjected to further theoretical and empirical validation. Perspective-based checklists are a useful validation method but the practitioner/consultant perspective presents difficulties.

KW - Controlled experiments

KW - Software engineering

KW - Guidelines

KW - Perspective-based reading

KW - Checklist-based reviews

UR - http://www.scopus.com/inward/record.url?scp=37649000875&partnerID=8YFLogxK

U2 - 10.1007/s10664-007-9053-5

DO - 10.1007/s10664-007-9053-5

M3 - Journal article

VL - 13

SP - 97

EP - 121

JO - Empirical Software Engineering

JF - Empirical Software Engineering

SN - 1382-3256

IS - 1

ER -