Robot bullying. (2020)
Type of Content: Theses / Dissertations
Thesis Discipline: Human Interface Technology
Degree Name: Doctor of Philosophy
Publisher: University of Canterbury
Authors: Keijsers, Merel
When robots made their first unsupervised entrance into public space, their engineers were confronted with an unexpected phenomenon: robot bullying (see for example Brscić, Kidokoro, Suehiro, & Kanda, 2015; Salvini et al., 2010). While the phenomenon has continued to manifest itself since, and a few theoretical explanations have been suggested, little empirical work has yet been done to substantiate any of this theorising. This thesis summarises five pieces of research that explore which psychological factors influence people's willingness to behave anti-socially towards robots. It is structured around four experiments on human-robot interaction (Chapters 2, 3, 5, and 6) and one analysis of human-chatbot interaction (Chapter 4). In addition, it offers some general reflections on the methodological and philosophical issues involved in studying robot bullying (section 7.2), as well as on the role of mind attribution (i.e., attributing the ability to think and feel to another being; section 7.4), which has been a recurring measure of interest throughout the experiments.
Chapter 1 provides an overview of the motivation for the thesis topic and the research questions. It also includes a general discussion of the relevant literature, focusing on anthropomorphism of nonhuman agents, mind attribution as a factor of anthropomorphism, and how dehumanisation as a facilitator for interhuman aggression may be generalisable to human-robot interaction as well.
Chapter 2 describes an experiment that explored whether bullying behaviour is perceived as more morally acceptable if the victim is a robot rather than a human. The results indicated no significant difference in moral acceptability, and suggested that higher levels of mind attribution were related to lower acceptability of abuse.
Chapter 3 expands on these findings with two studies that experimentally manipulated mind attribution. Whereas participants in the experiment from Chapter 2 were passive spectators of a human-robot interaction, one of the experiments in this chapter involved active interaction between a participant and a robot. In two experiments we investigated the influence of mind attribution to a robot on the perceived acceptability of robot bullying and on people's willingness to bully a robot. Results indicated that the acceptability of robot bullying can be manipulated both explicitly, by providing people with information about the robot's mind, and implicitly, by having the robot give off emotional cues; these effects are independent of one another. Interestingly, mind attribution to the robot was not associated with a lower incidence of robot bullying in this experiment.
In contrast to the studies reported in the other chapters, the study covered in Chapter 4 did not employ an experimental design. Almost 300 conversations between users and an online chatbot were harvested and coded for humanlikeness of the chatbot, self-disclosure by the user, and, importantly, the amount of verbal abuse or sexual harassment. Subsequent analyses showed that humanlikeness in the chatbot was associated with more abuse (both sexual harassment and verbal aggression). Self-disclosure in the form of mentioning one's gender (whether male or female) was associated with less verbal aggression, but more sexual harassment.
Chapter 5 describes an experiment that investigated whether mind attribution is linked to robot abuse. Mind attribution to the robot was to be manipulated by priming participants with a feeling of power, as previous studies on dehumanisation had shown that power reduces mind attribution. In addition, humanlike qualities of the robot were manipulated. The participants' verbal abuse of a virtual robot was measured as the main outcome of interest; mind attribution to the robot and humanlikeness of the robot were measured as manipulation checks. Contrary to previous findings in human-human interaction, priming participants with power did not reduce mind attribution. However, evidence for dehumanisation was still found: the less mind participants attributed to the robot, the more aggressive their responses were. This effect was moderated by both the power prime and the robot humanlikeness manipulation.
The discussion section of Chapter 5 offers an explanation for these surprising results, which is put to the test in Chapter 6, where an expansion of the Chapter 5 experiment is presented. Feelings of power, robot embodiment (virtual versus embodied), and feelings of threat were experimentally manipulated. Participants played a learning task with either a virtual or an embodied robot, and were asked to restrict the robot's energy supply after each wrong answer, which was taken as a measure of aggression. Results indicated that an embodied robot was punished less harshly than a virtual one, except when people had been primed with power and threat. Being primed with power diminished the influence of mind attribution on aggression. Mind attribution increased aggression in the threat condition, but was related to decreased aggression when people had not been reminded of threat. These results suggest that while mind attribution appears to play a role in robot bullying, the relationship is too complicated to be explained by dehumanisation theory alone.
Finally, Chapter 7 aggregates the results from the studies in this thesis to answer the thesis research questions. In addition, the strengths and limitations of the research are discussed, and trends in mind attribution to the robots used across the different experiments are reviewed. The chapter closes with possible directions for future research.