Reciprocity in Human-Robot Interaction
Reciprocity is a basic characteristic of Human-Human Interaction (HHI); however, few previous studies have examined reciprocity in Human-Robot Interaction (HRI). The imminent arrival of social robots interacting with users in their daily spaces has encouraged HRI researchers to describe these new relations between humans and robots in terms of reciprocity, persuasion, likeability, and trust. Consequently, these studies could inform the design of new social robots. This thesis considers three main research questions:
- To what extent do humans reciprocate towards robots?
- To what extent can robots use reciprocation for their own benefit?
- What are the most beneficial and preferred reciprocal strategies between humans and robots?

I used Game Theory to develop three experimental studies. Decision games such as the Prisoner's Dilemma, the Ultimatum Game, the Repeated Ultimatum Game, and Rock, Paper, Scissors were used in the experiments. These games offer an engaging interaction between the participants and the robots, and they allow measurement of the variables related to reciprocity in HRI. The studies were operationalised under the definition of reciprocity proposed by Fehr and Gaechter, which explains that, in response to friendly actions, people are frequently much nicer and much more cooperative than predicted by the self-interest model; conversely, in response to hostile actions they are frequently much more nasty and even brutal. In addition, the first and third studies used the "tit for tat" strategy with different modifications, since it is a well-studied reciprocal strategy tested in previous experiments. Across all the studies, our main goal was to measure to what extent the Norm of Reciprocity proposed by Gouldner, "to those who help us, we should return help, not harm", applies to Human-Robot Interaction.

In the first study, we investigated whether reciprocal behaviours exist in Human-Robot Interaction and to what extent people reciprocate towards robots compared with humans. We designed an experiment in which participants played the Prisoner's Dilemma and the Ultimatum Game with a NAO robot. We measured the number of reciprocations and collaborations between humans and robots and compared these with Human-Human Interactions.

In the second study, we investigated the negative side of the reciprocal phenomenon in HRI, exploring whether robots could use the natural human reciprocal response for their own benefit. In this study, we tried to answer questions such as: can a robot bribe a human?
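To illustrate the kind of strategy tested in the studies, the "tit for tat" rule in an iterated Prisoner's Dilemma can be sketched as follows. This is a minimal illustrative sketch only: the payoff values follow the standard T > R > P > S ordering, and all function names and parameters here are assumptions for exposition, not taken from the thesis experiments.

```python
# Illustrative sketch: tit-for-tat in an iterated Prisoner's Dilemma.
# Payoffs follow the conventional T=5, R=3, P=1, S=0 ordering; these
# values and names are hypothetical, not from the thesis experiments.

PAYOFFS = {  # (my_move, their_move) -> (my_score, their_score)
    ("C", "C"): (3, 3),  # mutual cooperation (R)
    ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A hostile baseline strategy: defect on every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game; each strategy sees the opponent's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Against a cooperator, tit-for-tat sustains mutual cooperation; against a defector, it is exploited only once and then retaliates, which is the reciprocal "return help with help, harm with harm" pattern the studies build on.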
In the third study, we analysed participants' preferences among the reciprocal robotic strategies. Since the robots have identical physical embodiment, the design of appropriate robot behaviours is very important, as reciprocity plays a main role in the interaction between humans and robots. Our general research question in this study was: what type of robot behaviour do humans prefer when the robot's decisions affect them?

On the one hand, people tend to conform to the Norm of Reciprocity in Human-Robot Interaction as they do in Human-Human Interaction, but to a lesser extent; on the other hand, humans find the unpredictable behaviour of the briber robots likeable and do not judge them in moral terms. They do, however, tend to reciprocate less towards robots that try to take advantage of the situation or show unpredictable behaviour than towards a robot that shows honest, straightforward reciprocal behaviour. Furthermore, people prefer the most reciprocal and altruistic robot strategies over the selfish and most unpredictable ones. In other words, the construct of fairness, in the form of reciprocity, is present in HRI. In the future, once robots have achieved an acceptable level of social skills, our studies could serve as guidelines for robot behaviour designers.