Rights statement: This is the peer reviewed version of the following article: Taylor, C. A., Gurnell, M., Melville, C. R., Kluth, D. C., Johnson, N. and Wass, V. (2017), Variation in passing standards for graduation-level knowledge items at UK medical schools. Med Educ, 51: 612–620. doi:10.1111/medu.13240, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1111/medu.13240/abstract. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.
Accepted author manuscript, 253 KB, PDF document
Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License
Final published version
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Variation in passing standards for graduation-level knowledge items at UK Medical Schools
AU - Taylor, Celia
AU - Gurnell, Mark
AU - Melville, Colin Randolph
AU - Kluth, David
AU - Johnson, Neil
AU - Wass, Val
N1 - This is the peer reviewed version of the following article: Taylor, C. A., Gurnell, M., Melville, C. R., Kluth, D. C., Johnson, N. and Wass, V. (2017), Variation in passing standards for graduation-level knowledge items at UK medical schools. Med Educ, 51: 612–620. doi:10.1111/medu.13240, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1111/medu.13240/abstract. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.
PY - 2017/6
Y1 - 2017/6
N2 - Objectives: Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common ‘one from five’ single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences. Methods: A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013–2014; 60 in 2014–2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard for the common items set by each medical school using the Angoff or Ebel methods. Results: Of 31 invited medical schools, 22 (71%) participated in 2013–2014 and 30 (97%) in 2014–2015. Schools used a mean of 49 and 53 common items in 2013–2014 and 2014–2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f2): 0.041 in 2013–2014 and 0.218 in 2014–2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013–2014 and 6.5 percentage points in 2014–2015. There was a positive correlation between the relative standards set by schools in the 2 years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of the examination in the curriculum did not have a statistically significant impact on standards. Conclusions: Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and develop methods to reduce it.
U2 - 10.1111/medu.13240
DO - 10.1111/medu.13240
M3 - Journal article
VL - 51
SP - 612
EP - 620
JO - Medical Education
JF - Medical Education
SN - 0308-0110
IS - 6
ER -