
Electronic data

  • segmental_solution_IJSLL_resub

    Accepted author manuscript, 571 KB, PDF document

    Embargo ends: 8/07/24

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI:


A segmentally informed solution to automatic accent classification and its advantages to forensic applications

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 8/07/2022
Journal: International Journal of Speech, Language and the Law
Issue number: 2
Volume: 28
Number of pages: 32
Pages (from-to): 201-232
Publication status: Published
Original language: English

Abstract

Traditionally, work in automatic accent recognition has followed a similar research trajectory to that of language identification, dialect identification and automatic speaker recognition. The same acoustic modelling approaches that have been implemented in speaker recognition (such as GMM-UBM and i-vector-based systems) have also been applied to automatic accent recognition. These approaches form models of speakers’ accents by taking acoustic features from right across the speech signal, without knowledge of its phonetic content. Particularly for accent recognition, however, phonetic information is expected to add substantial value to the task. The current work presents an alternative modelling approach to automatic accent recognition, which forms models of speakers’ pronunciation systems using segmental information. This article claims that such an approach makes for a more explainable method, and is therefore more appropriate to deploy in settings where it is important to be able to communicate methods, such as forensic applications. We discuss the issue of explainability and show how the system operates on a large 700-speaker dataset of non-native English conversational telephone recordings.
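To illustrate the distinction the abstract draws, the sketch below (not the authors' system; all data, function names and model sizes are illustrative assumptions) contrasts a whole-signal GMM approach, which pools every acoustic frame per accent, with a segment-conditioned variant that fits one small model per (accent, segment) pair so that classification scores can be attributed to individual speech sounds:

```python
# Hypothetical sketch contrasting whole-signal vs. segmental accent modelling.
# Uses synthetic MFCC-like feature vectors; NOT the system described in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# --- Whole-signal approach: one GMM per accent over all frames,
#     with no knowledge of which phonetic segment each frame came from.
def train_acoustic_models(accent_frames, n_components=4):
    models = {}
    for accent, frames in accent_frames.items():
        models[accent] = GaussianMixture(
            n_components=n_components, random_state=0
        ).fit(frames)
    return models

def classify(models, frames):
    # Choose the accent whose model gives the highest mean log-likelihood.
    return max(models, key=lambda a: models[a].score(frames))

# --- Segment-conditioned approach: one GMM per (accent, segment) pair.
#     Scores decompose by segment, which is the explainability argument:
#     one can report WHICH vowels/consonants drove the decision.
def train_segmental_models(accent_segment_frames, n_components=2):
    return {
        accent: {
            seg: GaussianMixture(
                n_components=n_components, random_state=0
            ).fit(frames)
            for seg, frames in seg_frames.items()
        }
        for accent, seg_frames in accent_segment_frames.items()
    }

def classify_segmental(models, segment_frames):
    def total_score(accent):
        # Sum per-segment log-likelihoods over segments both sides share.
        return sum(
            models[accent][seg].score(frames)
            for seg, frames in segment_frames.items()
            if seg in models[accent]
        )
    return max(models, key=total_score)

# Synthetic demo: two "accents" with shifted feature distributions.
accent_frames = {
    "accent_A": rng.normal(0.0, 1.0, size=(500, 13)),
    "accent_B": rng.normal(1.0, 1.0, size=(500, 13)),
}
models = train_acoustic_models(accent_frames)
test = rng.normal(1.0, 1.0, size=(200, 13))  # drawn like accent_B
print(classify(models, test))
```

The segmental variant trades a single opaque likelihood for a sum of per-segment likelihoods, so a forensic report can point to the specific segments on which the accents differ rather than an unanalysable whole-signal score.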