
Electronic data

  • ISSP2024_kirkham_strycharczuk

    Final published version, 9.6 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


A dynamic neural field model of vowel diphthongisation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper

Published
Publication date: 13/05/2024
Host publication: Proceedings of the 13th International Seminar on Speech Production
Editors: Cécile Fougeron, Pascal Perrier
Publisher: ISSP
Pages: 193-196
Number of pages: 4
Original language: English
Event: ISSP 2024: 13th International Seminar on Speech Production - Autrans, France
Duration: 13/05/2024 - 17/05/2024
Conference number: 13th
https://issp24.inviteo.fr/

Symposium

Symposium: ISSP 2024: 13th International Seminar on Speech Production
Country/Territory: France
City: Autrans
Period: 13/05/24 - 17/05/24
Internet address: https://issp24.inviteo.fr/

Abstract

We advance a computational model of vowel diphthongisation that situates phonological representations in dynamic neural fields (DNFs), which represent the time-varying activation of neural populations that are sensitive to a given phonetic parameter range. We model all long vowels as two separate inputs to the DNF, with input timing governed by a coupled oscillator model that generates an anti-phase relationship between inputs. The location of time-varying maximum activation in the DNF forms a noisy dynamic target, which is used as input to a task dynamic model of gestural coordination. We find that spatial characteristics of long vowels are well captured by the model, which exhibits gradient variation between monophthongs and diphthongs. We also show that a simplified model of production/perception can simulate changes in a speaker's phonological planning representations, which could represent a mechanism behind sound change if transmitted across a community.
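The core idea of the abstract can be sketched as a minimal one-dimensional Amari-style dynamic neural field: two Gaussian inputs (standing in for the vowel onset and offglide targets) are gated in anti-phase, and the location of maximum field activation traces a dynamic target that moves from one input to the other. All parameter values, the sinusoidal anti-phase gating, and the field size are illustrative assumptions, not the authors' actual implementation.

```python
import math

# Minimal 1-D dynamic neural field sketch (hypothetical parameters).
N = 61                    # field nodes spanning a phonetic parameter (e.g. F2)
dt, tau = 1.0, 10.0       # time step and field time constant
h = -2.0                  # resting activation level
A_exc, sigma = 0.3, 3.0   # local-excitation strength and width
A_inh = 0.05              # global-inhibition strength

def f(u):                 # sigmoid output nonlinearity
    return 1.0 / (1.0 + math.exp(-u))

def gaussian_input(center, width, amp):
    return [amp * math.exp(-((i - center) ** 2) / (2 * width ** 2))
            for i in range(N)]

inp1 = gaussian_input(15, 3.0, 6.0)   # vowel onset target
inp2 = gaussian_input(45, 3.0, 6.0)   # offglide target

# Precomputed local-excitation kernel.
kernel = [[A_exc * math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
           for j in range(N)] for i in range(N)]

u = [h] * N
peaks = []                # argmax of activation per step: the dynamic target
T = 200
for t in range(T):
    phase = 2 * math.pi * t / T
    g1 = max(0.0, math.sin(phase))            # anti-phase gates standing in
    g2 = max(0.0, math.sin(phase + math.pi))  # for the coupled oscillators
    out = [f(ui) for ui in u]
    inhibition = A_inh * sum(out)             # global inhibition
    u = [u[i] + dt / tau * (-u[i] + h
                            + g1 * inp1[i] + g2 * inp2[i]
                            + sum(kernel[i][j] * out[j] for j in range(N))
                            - inhibition)
         for i in range(N)]
    peaks.append(max(range(N), key=lambda i: u[i]))
```

Under these toy settings, `peaks` moves from the onset target (node 15) in the first half of the cycle to the offglide target (node 45) in the second half, giving a diphthong-like trajectory; placing the two inputs close together would instead yield the monophthong end of the gradient the abstract describes.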