
How well does objective examination of professionalism discriminate medical students' practice commensurate with regulatory expectations?

Research output: Contribution to conference - Poster (without ISBN/ISSN)

Publication date: 2018
Original language: English
Event: Association for the Study of Medical Education: Annual Scientific Meeting - The Sage, Gateshead, United Kingdom
Duration: 11/07/2018 – 13/07/2018

Conference: Association for the Study of Medical Education
Abbreviated title: ASME
Country: United Kingdom


The development and attainment of professional identity include the need to meet the standards of the professional regulator; in the UK these are set out in the General Medical Council's Good Medical Practice (GMP)1. Whilst it can be challenging to integrate this reality of medical practice into curricular and assessment methodologies2, various multi-dimensional, multi-paradigmatic approaches to assessing professionalism have been described3,4, including simulated situations5.
Within an Objective Structured Clinical Examination (OSCE), we aimed to design and assess the performance of a station using GMP as the assessment framework for professionalism6. The station's emphasis was formative and developmental, with verbal and written feedback and access to learning resources.
Using an iterative approach, we constructed an OSCE station in which the student and an actor play foundation year 1 doctors, and the medical peer appears not fit for work due to health problems. The station assessed knowledge, skills and attitude, including the duty to raise appropriate concern about a colleague, inter-professional communication and protecting patients.
After the 2016 iteration, performance data and examiner and student feedback were used to improve the assessment for 2017. Examiner feedback and a student focus group from the 2017 iteration were then used to understand the station's educational impact. Performance data were analysed using regression analysis, with reliability measured by Cronbach's alpha.
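The reliability measure described here can be sketched in code. The following is a minimal illustration, assuming NumPy and wholly synthetic data (the student numbers, station count and random marks are invented for the example, not taken from the study):

```python
# Illustrative sketch only, using synthetic data -- not the study's actual
# marks or analysis code. It shows Cronbach's alpha and the "alpha
# difference on deletion" used to judge whether a station contributes to
# overall OSCE reliability.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha; rows are students, columns are OSCE stations."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def alpha_if_deleted(scores: np.ndarray, station: int) -> float:
    """Alpha recomputed with one station (column) removed."""
    return cronbach_alpha(np.delete(scores, station, axis=1))

# Hypothetical marks for 40 students across 10 stations, built around a
# shared 'ability' component so that station scores correlate.
rng = np.random.default_rng(0)
ability = rng.normal(size=(40, 1))
marks = ability + rng.normal(scale=1.0, size=(40, 10))

alpha = cronbach_alpha(marks)
# If alpha falls when a station is deleted (a positive difference here),
# that station is contributing to overall reliability.
diff = alpha - alpha_if_deleted(marks, station=0)
```

The alpha-if-deleted comparison is the standard item-analysis step behind statements such as "Cronbach's alpha went down on deletion" in the results below.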
Examiners and students felt the station worked well, but students conflated keeping a conversation with a colleague private with the duty of confidentiality owed to patients, and underestimated the need to act promptly. Students were also uncertain to whom they should escalate concerns. In the focus group, students spoke in terms of what they "should have" done, suggesting the emergence of greater insight; however, only one looked at GMP after the assessment. In 2016, examiners found the binary assessment 'check-list' failed to discriminate students with partial insight from those with none, so in 2017 a qualitative global rating scale was used.
In 2016 the station fail rate was 34%; R2 was 0.86, with 56% of variance related to the circuit and an alpha difference of 0.02 on deletion. In 2017 the fail rate was 24%; R2 was 0.80, with 20% of variance related to the circuit and an alpha difference of -0.06 on deletion. The inter-circuit variation in the 2016 iteration, although not ideal, is difficult to interpret given the small numbers of students: a maximum of 6 per circuit across 8 circuits in total. The professionalism station contributed to overall OSCE reliability in 2016, as Cronbach's alpha went down on deletion, but not in 2017.
The correlation between marks and overall rating of performance was very good for both iterations, slightly stronger for the 2016 'check-list' mark scheme. Discrimination was also good in both iterations, again better in 2016 (14.6% versus 10.6%). Discrimination between borderline and clearly passing students was less clear with the global rating scale, but clearly failing and very good students were distinct. Examiner narrative supported the 2017 iteration's ability to identify students who had clearly recognised the fundamental actions required.
The OSCE model can use a professional framework, in this case GMP, to assess professionalism in the context of a situational judgement, with reasonable discrimination of student performance.
OSCE station design and marking should be linked to students' prior contextual experience; global ratings and examiner narrative appear to assess performance better. A binary mark scheme also discriminates well, but performance expectations must then reflect learner context. Although more work is needed to understand the impact of this approach on professional practice, graduates may benefit from revisiting these scenarios, as they may feel more 'real' or relevant, which may help to reinforce what was previously learned.
1. GMC (2013) Good Medical Practice.
2. O'Sullivan, H. et al. (2012) Integrating professionalism into the curriculum: AMEE Guide No. 61. Medical Teacher, 34(2), e64-e77.
3. Hodges, B. et al. (2011) Assessment of professionalism: Recommendations from the Ottawa 2010 Conference. Medical Teacher, 33(5), 354-363.
4. Goldie, J. (2013) Assessment of professionalism: A consolidation of current thinking. Medical Teacher, 35(2), e952-e956.
5. van Zanten, M. et al. (2005) Using a standardised patient assessment to measure professional attributes. Medical Education, 39, 20-29.
6. GMC (2016) Achieving good medical practice: guidance for medical students.