Did personally relevant robotic failures affect human perception?


24 Feb, 2022

The human-robot interaction experiment: the laundry sorting workstation [Image Credit: Research Paper]

Personally relevant robotic failures (PeRFs) reduce trust in robots, as well as their likeability and people's willingness to use them, more than failures that are not personal to the user.

As technology advances, humans and robots are working more closely together to increase industrial productivity and the quality of manufactured products, resulting in improved efficiency and growth. A major research interest is designing humanoid robots that can become co-workers rather than mere tools. From industrial robots to domestic and service robots, roboticists and computer engineers are witnessing a growing need for human-robot collaboration. For robotic systems to be deployed alongside human workers, humans must trust the robots' abilities and be willing to use them as co-workers.

Human-robot interaction has been widely studied over the past decade, including evaluations of robotic failures of varying severity and consequences. However, most of this research was restricted to artificial settings that lacked real-life external factors, limiting how well the measured responses generalize. Personal relevance (PeR), defined as the level of involvement with an object (in this case, the robot), shapes user preferences and perceptions of robots. Celsi and Olson found that PeR affects human perception [1], which is why a group of scientists from Ben-Gurion University of the Negev in Israel [2] speculated that it would also affect how robotic failures are perceived.

The Evaluation

In the research article, “Is it personal? The impact of personally relevant robotic failures (PeRFs) on humans' trust, likeability, and willingness to use the robot,” the team carried out a study to evaluate the factors that affect how much humans trust robots without compromising productivity. The paper primarily examines whether personally relevant robotic failures (PeRFs) reduce trust in the robot, its likeability, and the willingness to use it more than failures that are not personal to the user.

While existing research had failed to evaluate these factors in realistic scenarios, the team carried out a series of three experiments in an environment designed to simulate a realistic laundry-sorting setup. To investigate the role of PeR in human-robot interaction, the three laboratory experiments used different manipulations: the first involved damage to personal property, the second involved financial loss, and the third contrasted first-person with third-person failure scenarios. A total of 132 participants engaged with the robot in the collaborative laundry-sorting task, and the results indicate that the impact of PeRFs on perceptions of the robot differed across the experimental setups.

Experiment 1: The interaction between the level of personal relevance and the session in terms of trust and willingness to use. [Image Credit: Research Paper]


Specifically, in the first experiment, the team evaluated the impact of a failure that could damage participants' property and examined the interaction between PeR and perceived failure severity. Severity was high when the robot dropped a clothing item into the trash can instead of the laundry bin, and lower when the robot dropped the item onto the floor. The second experiment, in which robotic failures caused delays leading to financial loss, showed that failures with PeR can negatively impact humans' trust in robots. The results of experiment two suggest that perceived trust before the failure was higher than trust after the failure, indicating that the failures decreased the users' trust.
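To illustrate the kind of before/after comparison described above, here is a minimal sketch (not taken from the paper) of how perceived trust ratings collected before and after a failure could be compared with a paired t-test. The rating values, scale, and variable names are hypothetical and only stand in for the study's actual data and analysis.

# Minimal sketch: comparing perceived trust before vs. after a robotic failure.
# The ratings below are hypothetical; the original study's data and exact
# statistical tests are described in the paper.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical trust ratings (1-7 scale) from the same participants,
# measured before and after experiencing a personally relevant failure.
trust_before = np.array([6, 5, 6, 7, 5, 6, 6, 5, 7, 6])
trust_after  = np.array([4, 4, 5, 5, 3, 5, 4, 4, 5, 4])

# Paired t-test: did mean trust drop after the failure?
t_stat, p_value = ttest_rel(trust_before, trust_after)

print(f"mean before = {trust_before.mean():.2f}, "
      f"mean after = {trust_after.mean():.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

In this sketch, a large positive t statistic with a small p-value would mirror the pattern reported for experiment two, where trust after the failure was lower than trust before it.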

Finally, in the last experimental setup, the failure was a wrong gender identification of either the participant or the experimenter. The conversation between the robot and the participant was in Hebrew, a gendered language in which the speaker addresses the other person in a way that specifies their gender. In the high-PeR condition, the robot incorrectly identified the participant's gender, while in the lower-PeR condition it incorrectly identified the experimenter's gender. Surprisingly, in this experiment there was no difference between trust before and after the failure, nor between likeability and willingness to use (LWtU) before and after the failure. This indicates that the PeRF manipulation in this interaction had no measurable impact on the participants.

The findings of the research article indicate that the robot's failures changed human perception and affected the extent to which users were willing to trust the robot for its designed functionality. The willingness to use the robot was also affected by its failures, an observation that motivates future studies. The article was published on arXiv, Cornell University's open-access research-sharing platform.

Reference

[1] R. L. Celsi and J. C. Olson, “The Role of Involvement in Attention and Comprehension Processes,” Journal of Consumer Research, vol. 15, pp. 210–224, 1988.

[2] R. Gideoni, S. Honig, and T. Oron-Gilad, “Is it personal? The impact of personally relevant robotic failures (PeRFs) on humans' trust, likeability, and willingness to use the robot,” arXiv:2201.05322 [cs.RO], 2022.


Abhishek Jadhav is an engineering student, RISC-V ambassador, and a freelance technology and science writer with bylines at EdgeIR, Electromaker, Embedded Computing Design, Electronics-Lab, and Hackster.