Learning effects in multimodal perception with real and simulated faces

Type of content
Conference Contributions - Published
Publisher
Australian Speech Science and Technology Association Inc.
Date
2019
Authors
Keough, M
Derrick, Donald
Taylor, RC
Gick, B
Abstract

We have all learned to associate real voices with animated faces since childhood. Researchers use this association, employing virtual faces in audiovisual speech perception tasks. However, we do not know if perceivers treat those virtual faces the same as real faces, or if instead integration of speech cues from new virtual faces must be learned at the time of contact. We test this possibility using speech information that perceivers have never had a chance to associate with simulated faces: aerotactile somatosensation. With human faces, silent bilabial articulations (“ba” and “pa”), accompanied by synchronous cutaneous airflow, shift perceptual bias towards “pa”. If visual-tactile integration is unaffected by the visual stimuli’s ecological origin, results with virtual faces should be similar. Contra previous reports [8], our results show perceivers do treat computer-generated faces and human faces in a similar fashion: visually aligned cutaneous airflow shifts perceptual bias towards “pa” equally well with virtual and real faces.

Citation
Keough M, Derrick D, Taylor RC, Gick B (2019). Learning effects in multimodal perception with real and simulated faces. Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, 5–9 August 2019. 1189–1192.
Keywords
Speech Perception, Speech Acoustics, Multimodal Phonetics
ANZSRC fields of research
Fields of Research::47 - Language, communication and culture::4704 - Linguistics