An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures (2006)
Type of Content: Conference Contributions - Published
Publisher: University of Canterbury. Human Interface Technology Laboratory.
Authors: Irawati, S., Green, S., Billinghurst, M., Duenser, A., Ko, H.
This paper presents an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and our multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present the results of a user study comparing multimodal speech-and-gesture input with gesture-only input. The results show that combining speech and paddle gestures improves the efficiency of user interaction. Finally, we offer design recommendations for developing other multimodal AR interfaces.
Citation: Irawati, S., Green, S., Billinghurst, M., Duenser, A., Ko, H. (2006) An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures. Hangzhou, China: 16th International Conference on Artificial Reality and Telexistence (ICAT 2006), 29 Nov-2 Dec 2006. Lecture Notes in Computer Science (LNCS), 4282, Advances in Artificial Reality and Tele-Existence, 272-283.