Does phonetic bias plus spoken interaction add up to sound change? - Speaker: Felicitas Kleber
Sound changes occur on a daily basis, be it in the form of occasional categorical misproductions and misperceptions, or of gradual distributional shifts in the acoustic-auditory space that, following exemplar theory (Pierrehumbert 2016, Annual Review of Linguistics 2), shape our sound systems. Such mini sound changes at the individual level often go unnoticed but, depending on their frequency, can result in maxi (or diachronic) sound changes at the level of the speech community. According to the interactive-phonetic model of sound change (Harrington et al. 2018, Topics in Cognitive Science 10), the likelihood that a sound change spreads at the group level increases when there are pronounced phonetic biases in the acoustic-auditory space towards new variants, when these variants are amplified through spontaneous imitation below the level of consciousness in spoken interaction, and when perceptual filter mechanisms are weakened. In this talk I will introduce the key concepts (phonetic biases, perceptual filter) and, on the basis of experimental findings and computational modelling, discuss the role of phonetic and other biases in speech production and perception in sound change.
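To make the interplay of bias, imitation and filtering concrete, the following is a minimal, purely illustrative agent-based sketch in Python. It is not the model of Harrington et al. (2018); all parameters (BIAS, FILTER_WIDTH, NOISE_SD) and the exemplar-memory setup are invented for illustration. Each agent stores exemplars of one vowel category, every production carries a small constant phonetic bias, and a listener imitates a token only if it passes a simple perceptual filter; over many interactions the community mean drifts in the direction of the bias.

    import random

    # Toy sketch of bias + imitation + perceptual filtering (hypothetical parameters).
    BIAS = 5.0           # assumed phonetic bias added to every production (Hz)
    FILTER_WIDTH = 300   # assumed perceptual filter: max distance from listener's mean (Hz)
    NOISE_SD = 40.0      # random production noise (Hz)

    def make_agent(mean=1500.0, sd=50.0, n=100):
        """Initialise an agent with a cloud of exemplars around a category mean."""
        return [random.gauss(mean, sd) for _ in range(n)]

    def produce(agent):
        """Produce a token: a stored exemplar plus noise plus the phonetic bias."""
        return random.choice(agent) + random.gauss(0.0, NOISE_SD) + BIAS

    def perceive(agent, token):
        """Store the token only if it is close enough to the listener's own category mean."""
        mean = sum(agent) / len(agent)
        if abs(token - mean) < FILTER_WIDTH:
            agent.append(token)
            agent.pop(0)  # forget the oldest exemplar to keep memory bounded

    def community_mean(agents):
        return sum(sum(a) / len(a) for a in agents) / len(agents)

    if __name__ == "__main__":
        random.seed(1)
        agents = [make_agent() for _ in range(10)]
        print(f"start: {community_mean(agents):.1f} Hz")
        for _ in range(20000):
            speaker, listener = random.sample(range(len(agents)), 2)
            perceive(agents[listener], produce(agents[speaker]))
        print(f"end:   {community_mean(agents):.1f} Hz")

In this toy setting, weakening the filter (a larger FILTER_WIDTH) lets more biased tokens into listeners' exemplar memories and speeds up the drift, whereas a very strict filter largely blocks the change, which is the intuition behind the role of perceptual filter mechanisms discussed in the talk.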