The Interface for Remote Parallel Experience Collaboration Using Augmented Visual Communication Cues (2016)
Type of Content: Theses / Dissertations
Thesis Discipline: Human Interface Technology
Degree Name: Doctor of Philosophy
Publisher: University of Canterbury
Today people use video conferencing systems to share their task space for many collaborative purposes. There is a body of research on adding visual communication cues to the shared task space, but it mostly focuses on remote expert collaboration, in which communication flows largely one way, from a remote expert to a local operator. This thesis explores parallel experience collaboration, in which both users discuss their ideas via a video conferencing system that supports verbal communication and visual communication cues such as pointers or drawing annotations. The first study (Chapter 5) explores the use of two different visual communication cues (pointer and drawing annotation) and how they affect parallel experience collaboration. The user study found that while both cues increased the level of connectedness, the participants preferred the pointer interface over the annotation interface. The second and third studies (Chapter 6) investigate solutions to issues with the annotation interface. One such issue is that if the local user (who is sharing his or her task space) changes the viewpoint of the shared live video while the remote user is drawing an annotation, the annotation is drawn in the wrong place. In the second study, the author investigates freeze functions as a solution, comparing auto freeze and manual freeze conditions against a non-freeze condition. The user study revealed that the auto freeze function solved the issue without requiring additional input or losing the “liveness” of the shared live video. In the third study, the author investigated solutions for the local users’ task management: local users sometimes missed new annotations from the remote users while focusing on their own tasks. As potential solutions, the author proposed the red box and the both-freeze notifications and conducted a user study comparing them with a non-notification condition.
The results showed that the participants significantly preferred the red box notification because it solved the problem without causing significant interruption. Since making the remote user’s view independent of the local user’s view is known to improve remote expert collaboration, the fourth study (Chapter 7) explores the use of an independent view in parallel experience collaboration. As in remote expert collaboration, participants in parallel experience collaboration preferred the independent view because it provided a stable view rather than one that moves with the local user’s head movement. In conclusion, through a series of user experiments the author found that a drawing annotation interface for parallel experience collaboration should 1) support quick and easy use of drawing annotation, 2) include appropriate notifications that help the local user know when the remote user is making an annotation, 3) support an independent view, and 4) allow the remote user to start drawing while toggling from the dependent view to the independent view. This Ph.D. makes several contributions.
The author introduces one of the earliest prototypes for anchoring drawings in the real world without any previously known tracking data or visual markers (Chapter 4); describes the use of visual communication cues (pointers and drawing annotations) for parallel experience collaboration and identifies four user states (collaborating together, playing in parallel, passive, and do-it-alone) (Chapter 5); proposes an auto freeze interface for quick and easy drawing annotation and a notification interface that supports the local user’s task management without causing a significant level of interruption (Chapter 6); demonstrates the benefit of an independent view in parallel experience collaboration (Chapter 7); and, drawing on the four user studies, introduces a communication model for parallel experience collaboration that includes verbal and visual communication cues.
Rights: All Rights Reserved
Related items (by title, author, creator, and subject):
Gao L; Bai H; Piumsomboon T; Lee G; Lindeman RW; Billinghurst, Mark (2017)We present a prototype Mixed Reality (MR) system with a hybrid interface to support remote collaboration between a local worker and a remote expert in a large-scale work space. By combining a low-resolution 3D point-cloud ...
Kim, S.; Lee, G.; Sakata, N.; Vartiainen, E.; Billinghurst, Mark (University of Canterbury. Human Interface Technology Laboratory, 2013)In this research, we explore using pointing and drawing in a remote collaboration system. Our application allows a local user with a tablet to communicate with a remote expert on a desktop computer. We compared performance ...
Tait, M.; Tsai, T.; Sakata, N.; Billinghurst, Mark; Vartiainen, E. (University of Canterbury. Electrical and Computer Engineering; University of Canterbury. Human Interface Technology Laboratory, 2013)This paper describes an AR system for remote collaboration using a captured 3D model of the local user’s scene. In the system a remote user can manipulate the scene independently of the view of the local user and add ...