
Electronic data

  • medical_education_manuscript_accepted_version_18oct16

    Rights statement: This is the peer-reviewed version of the following article: Taylor, C. A., Gurnell, M., Melville, C. R., Kluth, D. C., Johnson, N. and Wass, V. (2017), Variation in passing standards for graduation-level knowledge items at UK medical schools. Med Educ, 51: 612–620. doi:10.1111/medu.13240, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1111/medu.13240/abstract. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.

    Accepted author manuscript, 253 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1111/medu.13240


Variation in passing standards for graduation-level knowledge items at UK medical schools

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Celia Taylor
  • Mark Gurnell
  • Colin Randolph Melville
  • David Kluth
  • Neil Johnson
  • Val Wass
Journal publication date: 06/2017
Journal: Medical Education
Issue number: 6
Volume: 51
Number of pages: 9
Pages (from-to): 612–620
Publication status: Published
Early online date: 13/03/17
Original language: English

Abstract

Objectives: Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common ‘one from five’ single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences.

Methods: A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013–2014; 60 in 2014–2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard for the common items set by each medical school using the Angoff or Ebel methods.

Results: Of 31 invited medical schools, 22 (71%) participated in 2013–2014 and 30 (97%) in 2014–2015. Schools used a mean of 49 and 53 common items in 2013–2014 and 2014–2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013–2014 and 0.218 in 2014–2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013–2014 and 6.5 percentage points in 2014–2015. There was a positive correlation between the relative standards set by schools in the two years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of the examination in the curriculum did not have a statistically significant impact on standards.

Conclusions: Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and develop methods to reduce it.
