"Move the Couch Where?": Developing an Augmented Reality Multimodal Interface (2006)
Type of Content: Conference Contributions - Published
Publisher: University of Canterbury. Human Interface Technology Laboratory.
Authors: Irawati, S., Green, S., Billinghurst, M., Duenser, A., Ko, H.
This paper describes an augmented reality (AR) multimodal interface that uses speech and paddle gestures for interaction. The application allows users to intuitively arrange virtual furniture in a virtual room using a combination of speech and gestures made with a real paddle. Unlike other multimodal AR applications, the multimodal fusion combines time-based and semantic techniques to disambiguate a user's speech and gesture input. We describe our AR multimodal interface architecture and discuss how the multimodal inputs are semantically integrated into a single interpretation by considering the input time stamps, the object properties, and the user context.
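The fusion strategy the abstract describes — pairing a speech command with a paddle gesture only when their time stamps are close enough and the referenced object's properties permit the command — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the class names, the `FUSION_WINDOW` value, and the `movable` property check are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SpeechInput:
    command: str       # e.g. "move"
    object_name: str   # e.g. "couch"
    timestamp: float   # seconds

@dataclass
class GestureInput:
    target_position: Tuple[float, float]  # paddle location in the room
    timestamp: float

# Hypothetical window within which speech and gesture are treated as one utterance.
FUSION_WINDOW = 2.0

def fuse(speech: SpeechInput,
         gesture: GestureInput,
         movable_objects: set) -> Optional[Tuple[str, str, Tuple[float, float]]]:
    """Combine speech and gesture into one interpretation, or return None."""
    # Time-based check: inputs must occur close together to be fused.
    if abs(speech.timestamp - gesture.timestamp) > FUSION_WINDOW:
        return None
    # Semantic check: the named object must support the spoken command.
    if speech.command == "move" and speech.object_name not in movable_objects:
        return None
    return (speech.command, speech.object_name, gesture.target_position)
```

For example, saying "move the couch" while tapping the paddle at a spot in the room within the window would yield a single `("move", "couch", position)` interpretation, while the same pair separated by several seconds would be rejected.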
Citation: Irawati, S., Green, S., Billinghurst, M., Duenser, A., Ko, H. (2006) "Move the Couch Where?": Developing an Augmented Reality Multimodal Interface. Santa Barbara, CA, USA: 5th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 06), 22-25 Oct 2006. Proceedings ISMAR International Symposium on Mixed and Augmented Reality, 183-186.