Title: A Holistic Design Concept For Eyes-Free Mobile Interfaces
Authors: Dicke, Christina
Issue Date: 2012
Abstract: This thesis presents a series of studies to explore and understand the design of eyes-free interfaces for mobile devices. The motivation is to devise a holistic design concept that is based on the WIMP paradigm and adapted to the requirements of mobile user interaction. It is proposed that audio is a highly efficient and effective modality for use in an eyes-free mobile interface. Methods to transfer the WIMP paradigm to eyes-free interfaces are proposed and evaluated. Guidelines for implementing the paradigm are given, and a holistic design concept is proposed by means of an example.
This thesis begins with an introduction to, and critical reflection of, recurrently important themes and research methods from the disciplines of psychoacoustics, psychology, and presence research. An overview of related work is given, paying particular attention to the use of interface metaphors in mobile eyes-free interfaces. The notion of distance is discussed as a method to prioritise, structure, and manage attention in eyes-free interfaces. Practical issues arising from sources becoming inaudible with increasing distance are addressed by a method modeled on echolocation. This method was compared to verbally coded distance information and proved useful for identifying the closest of several objects, while verbally coded distance information was found to be more efficient for identifying the precise distance of an object. The knowledge gained from the study can contribute to improving other applications, such as GPS-based navigation. Furthermore, the issue of gaining an overview of accessible objects by means of sound was examined. The results showed that a minimum gap of 200 ms between adjacent sound samples should be maintained. Based on these findings, both earcons and synthesized speech are suitable, although speech has the advantage of being more flexible and easier to learn. Monophonic reproduction yields results comparable to spatial reproduction; however, spatial reproduction has the additional benefit of indicating an item's position. These results are transferable and generally relevant for the use of audio in HCI.
Tactile interaction techniques were explored as a means to interact with an auditory interface and were found to be both effective and enjoyable. One of the more general observations was that participants used 2D and 3D gestures intuitively, transferring their knowledge of established gestures to auditory interfaces. It was also found that participants often used 2D gestures to select an item and proceeded to manipulate it with a 3D gesture. The results suggest the use of a small gesture set with reversible gestures for do/undo-type actions, which was further explored in a follow-up study. That study showed that simple 3D gestures are a viable way of manipulating spatialized sound sources in a complex 3D auditory display.
While the main contribution of this thesis lies in the area of HCI, previously unresearched issues from adjacent disciplines that affect the user experience of auditory interfaces were also addressed. It was found that regular, predictable movement patterns in 3D audio spaces cause symptoms of simulator sickness; however, these symptoms were minor and occurred only under extreme conditions. Additionally, the influence of the audio reproduction method on the perception of presence, social presence, and realism was examined. Both stereophonic and binaural reproduction were found to have advantages over monophonic reproduction: stereophonic sound increases the perception of social presence, while binaural sound increases the feeling of being present in a virtual environment. These results are important contributions insofar as one of the main applications of mobile devices is voice-based communication, and it is reasonable to assume that real-time voice-based social and cooperative networking applications will become increasingly common.
This thesis concludes with a conceptual design of a system called "Foogue", which uses the results of the previous experiments as the basis of an eyes-free interface that utilizes spatial audio and gesture input.
Publisher: University of Canterbury. Computer Science and Software Engineering
Degree: Doctor of Philosophy
Rights: Copyright Christina Dicke
Rights URI: http://library.canterbury.ac.nz/thesis/etheses_copyright.shtml
Appears in Collections: Theses and Dissertations
Items in UC Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.