Learning effects in multimodal perception with real and simulated faces
Abstract
We have all learned, since childhood, to associate real voices with animated faces. Researchers exploit this association by employing virtual faces in audiovisual speech perception tasks. However, we do not know whether perceivers treat those virtual faces the same as real faces, or whether integration of speech cues from new virtual faces must instead be learned at the time of contact. We test this possibility using speech information that perceivers have never had a chance to associate with simulated faces: aerotactile somatosensation. With human faces, silent bilabial articulations ("ba" and "pa"), accompanied by synchronous cutaneous airflow, shift perceptual bias towards "pa". If visual-tactile integration is unaffected by the ecological origin of the visual stimuli, results with virtual faces should be similar. Contra previous reports [8], our results show that perceivers do treat computer-generated faces and human faces in a similar fashion: visually aligned cutaneous airflow shifts perceptual bias towards "pa" equally well with virtual and real faces.