Learning effects in multimodal perception with real and simulated faces (2019)
Type of Content: Conference Contributions - Published
Publisher: Australasian Speech Science and Technology Association Inc.
We have all learned to associate real voices with animated faces since childhood. Researchers exploit this association by employing virtual faces in audiovisual speech perception tasks. However, we do not know whether perceivers treat those virtual faces the same as real faces, or whether integration of speech cues from new virtual faces must instead be learned at the time of contact. We test this possibility using speech information that perceivers have never had a chance to associate with simulated faces – aerotactile somatosensation. With human faces, silent bilabial articulations (“ba” and “pa”), accompanied by synchronous cutaneous airflow, shift perceptual bias towards “pa”. If visual-tactile integration is unaffected by the ecological origin of the visual stimuli, results with virtual faces should be similar. Contra previous reports, our results show that perceivers do treat computer-generated faces and human faces in a similar fashion: visually aligned cutaneous airflow shifts perceptual bias towards “pa” equally well with virtual and real faces.
Citation: Keough M, Derrick D, Taylor RC, Gick B (2019). Learning effects in multimodal perception with real and simulated faces. In Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, 5-9 August 2019, pp. 1189-1192.
Keywords: Speech Perception; Speech Acoustics; Multimodal Phonetics
ANZSRC Fields of Research: 47 - Language, Communication and Culture :: 4704 - Linguistics
Related items (by title, author, creator and subject):
Visual-tactile integration in speech perception: Evidence for modality neutral speech primitives. Bicevskis K; Derrick, Donald; Gick B (Acoustical Society of America (ASA), 2016). © 2016 Acoustical Society of America. Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over ...
Gick B; Ikegami Y; Derrick, Donald (Acoustical Society of America (ASA), 2010). Asynchronous cross-modal information is integrated asymmetrically in audio-visual perception. To test whether this asymmetry generalizes across modalities, auditory (aspirated "pa" and unaspirated "ba" stops) and tactile ...
Three speech sounds, one motor action: Evidence for speech-motor disparity from English flap production. Derrick, Donald; Stavness, I.; Gick, B. (University of Canterbury. New Zealand Institute of Language, Brain & Behaviour, 2015). The assumption that units of speech production bear a one-to-one relationship to speech motor actions pervades otherwise widely varying theories of speech motor behavior. This speech production and simulation study ...