Multimodal Speech-Gesture Interaction with 3D Objects in Augmented Reality Environments (2010)

Type of Content: Theses / Dissertations
Thesis Discipline: Computer Science
Degree Name: Doctor of Philosophy
Publisher: University of Canterbury. Department of Computer Science and Software Engineering
Abstract
Augmented Reality (AR) combines the real world seamlessly with computer-generated content, making it possible to interact with virtual and real objects at the same time. However, most AR interface research uses general Virtual Reality (VR) interaction techniques without modification. In this research we develop a multimodal interface (MMI) for AR with speech and 3D hand gesture input. We develop a multimodal signal fusion architecture, based on user behaviour observed while interacting with the MMI, that provides more effective and natural multimodal signal fusion. Speech and 3D vision-based free-hand gestures are used as the multimodal input channels. Two user observation studies were conducted: (1) a Wizard of Oz study and (2) gesture modelling. In the Wizard of Oz study, we observed how users behaved when interacting with our MMI. Gesture modelling was undertaken to explore whether different types of gestures can be described by pattern curves. Based on these experimental observations, we designed our own multimodal fusion architecture and developed an MMI. User evaluations were conducted to assess the usability of our MMI. We found that the MMI is more efficient than the unimodal interfaces and that users are more satisfied with it. We also describe design guidelines derived from our findings in the user studies.
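The abstract does not detail how the fusion architecture combines the two input channels, but a common baseline for speech-gesture fusion is temporal alignment: pairing a spoken command with a gesture that occurs within a short time window. The sketch below is purely illustrative and not taken from the thesis; the names (`InputEvent`, `fuse`) and the 1.5-second window are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # "speech" or "gesture" (assumed labels)
    content: str     # recognized command or gesture token
    timestamp: float # seconds

def fuse(events, window=1.5):
    """Pair each speech event with the closest gesture event within
    `window` seconds; unpaired speech events become unimodal commands,
    and leftover gestures are appended as gesture-only commands."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    fused, used = [], set()
    for s in speech:
        best = None
        for i, g in enumerate(gestures):
            if i in used:
                continue
            dt = abs(s.timestamp - g.timestamp)
            if dt <= window and (
                best is None
                or dt < abs(s.timestamp - gestures[best].timestamp)
            ):
                best = i
        if best is not None:
            used.add(best)
            fused.append((s.content, gestures[best].content))
        else:
            fused.append((s.content, None))
    # Gestures with no nearby speech pass through as gesture-only input.
    for i, g in enumerate(gestures):
        if i not in used:
            fused.append((None, g.content))
    return fused
```

For example, the utterance "move this" at t = 0.2 s and a pointing gesture at t = 0.5 s would fuse into one command, while a gesture at t = 5.0 s would remain unimodal. A behaviour-driven design like the one described in the thesis would tune the window (and possibly the pairing rule) from observed user timing rather than fix it in advance.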
Keywords
augmented reality; multimodal interface; natural hand gesture; gesture-speech input; multimodal fusion
Rights
Copyright Min Kyung Lee
Related items
Showing items related by title, author, creator and subject.
- An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures
  Irawati, S.; Green, S.; Billinghurst, Mark; Duenser, A.; Ko, H. (University of Canterbury. Human Interface Technology Laboratory, 2006) This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR ...
- Grasp-Shell vs Gesture-Speech: A Comparison of Direct and Indirect Natural Interaction Techniques in Augmented Reality
  Piumsomboon, T.; Altimira, D.; Kim, H.; Clark, A.; Lee, G.; Billinghurst, Mark (University of Canterbury. Human Interface Technology Laboratory, 2014) In order for natural interaction in Augmented Reality (AR) to become widely adopted, the techniques used need to be shown to support precise interaction, and the gestures used proven to be easy to understand and perform. ...
- Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces
  Bai, H.; Lee, G.A.; Billinghurst, Mark (University of Canterbury. Human Interface Technology Laboratory, 2012) Interaction techniques for handheld mobile Augmented Reality (AR) often focus on device-centric methods based around touch input. However, users may not be able to easily interact with virtual objects in mobile AR scenes ...