Multimodal Speech-Gesture Interaction with 3D Objects in Augmented Reality Environments

dc.contributor.author: Lee, Minkyung
dc.date.accessioned: 2010-07-11T22:34:43Z
dc.date.available: 2010-07-11T22:34:43Z
dc.date.issued: 2010
dc.description.abstract: Augmented Reality (AR) makes it possible to interact with virtual and real objects at the same time, since it seamlessly combines the real world with computer-generated content. However, most AR interface research applies general Virtual Reality (VR) interaction techniques without modification. In this research we develop a multimodal interface (MMI) for AR with speech and 3D hand gesture input. We develop a multimodal signal fusion architecture, based on observations of user behaviour while interacting with the MMI, that provides more effective and natural multimodal signal fusion. Speech and 3D vision-based free-hand gestures are used as the multimodal input channels. Two user observation studies were conducted: (1) a Wizard of Oz study and (2) gesture modelling. With the Wizard of Oz study, we observed how users behave when interacting with our MMI. Gesture modelling was undertaken to explore whether different types of gestures can be described by pattern curves. Based on these experimental observations, we designed our own multimodal fusion architecture and developed an MMI. User evaluations were conducted to assess the usability of the MMI. We found that the MMI is more efficient, and users are more satisfied with it, when compared to unimodal interfaces. We also present design guidelines derived from the findings of our user studies.
dc.identifier.uri: http://hdl.handle.net/10092/4094
dc.identifier.uri: http://dx.doi.org/10.26021/2223
dc.language.iso: en
dc.publisher: University of Canterbury. Department of Computer Science and Software Engineering
dc.relation.isreferencedby: NZCU
dc.rights: Copyright Min Kyung Lee
dc.rights.uri: https://canterbury.libguides.com/rights/theses
dc.subject: augmented reality
dc.subject: multimodal interface
dc.subject: natural hand gesture
dc.subject: gesture-speech input
dc.subject: multimodal fusion
dc.title: Multimodal Speech-Gesture Interaction with 3D Objects in Augmented Reality Environments
dc.type: Theses / Dissertations
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of Canterbury
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
uc.bibnumber: 1473413
uc.college: Faculty of Engineering
Files
Original bundle
Name: thesis_fulltext.pdf
Size: 1.66 MB
Format: Adobe Portable Document Format