A long-standing debate in the testing of listening concerns the authenticity of the listening input. On the one hand, listening texts produced by item-writers often lack the characteristics of spoken language. On the other hand, real-life recordings are often too context-specific to stand alone, or unsuitable for item generation. In this study, we explored the effectiveness of an existing item-writing training course in helping item-writers produce authentic-sounding listening texts within the constraints of test specifications. Twenty-five trainees took an online item-writing course that included training on creating authentic-sounding listening texts. Before and after the course, they each developed a listening task. The resulting listening texts were judged on authenticity by three professional item reviewers and analysed linguistically by the researchers. Additionally, we interviewed the trainees following each item-writing event and analysed their online discussions held during the course. Statistical comparison of the pre- and post-course authenticity scores revealed a positive effect of the training on item-writers’ ability to produce authentic-sounding listening texts, while the linguistic analysis demonstrated that the texts produced after the training contained more features of spoken language. The interviews and discussions revealed that item-writers’ awareness of spoken language features and their text production techniques influenced their ability to develop authentic-sounding texts.
This is an Accepted Manuscript of an article published by Taylor & Francis in Language Assessment Quarterly on 08/03/2021, available online: https://www.tandfonline.com/doi/abs/10.1080/15434303.2021.1895162