Dogs Obey Social Robots

Social robots are opening up new research topics in the field of animal-machine interaction. Meiying Qin from Yale University and her co-authors have brought together a Nao robot and dogs. From the abstract of their paper: “In two experiments, we investigate whether dogs respond to a social robot after the robot called their names, and whether dogs follow the ‘sit’ commands given by the robot. We conducted a between-subjects study (n = 34) to compare dogs’ reactions to a social robot with a loudspeaker. Results indicate that dogs gazed at the robot more often after the robot called their names than after the loudspeaker called their names. Dogs followed the ‘sit’ commands more often given by the robot than given by the loudspeaker. The contribution of this study is that it is the first study to provide preliminary evidence that 1) dogs showed positive behaviors to social robots and that 2) social robots could influence dog’s behaviors. This study enhance[s] the understanding of the nature of the social interactions between humans and social robots from the evolutionary approach. Possible explanations for the observed behavior might point toward dogs perceiving robots as agents, the embodiment of the robot creating pressure for socialized responses, or the multimodal (i.e., verbal and visual) cues provided by the robot being more attractive than our control condition.” (Abstract) You can read the full paper via dl.acm.org/doi/abs/10.1145/3371382.3380734.
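As a side note on the study design: a between-subjects comparison like this (robot condition versus loudspeaker condition) is often summarized by tabulating how many subjects in each group showed the behavior and running a simple test of independence. The sketch below is purely illustrative; the counts are made up and have no connection to the paper’s actual data or analysis.

```python
# Illustrative between-subjects comparison with hypothetical counts
# (NOT the study's data): did the dog sit in each condition?
from scipy.stats import fisher_exact

robot_sat, robot_did_not_sit = 10, 7        # hypothetical robot condition
speaker_sat, speaker_did_not_sit = 4, 13    # hypothetical loudspeaker condition

table = [[robot_sat, robot_did_not_sit],
         [speaker_sat, speaker_did_not_sit]]

# Fisher's exact test of independence on the 2x2 contingency table
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```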

Imitating the Agile Locomotion Skills of Four-legged Animals

Imitating the agile locomotion skills of animals has been a longstanding challenge in robotics. Manually designed controllers can reproduce many complex behaviors, but building them is time-consuming and difficult. According to Xue Bin Peng (Google Research and University of California, Berkeley) and his co-authors, reinforcement learning offers an appealing alternative that automates much of the manual effort involved in developing controllers. In their work, they present “an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals” (Xue Bin Peng et al. 2020). They show “that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire [of] behaviors for legged robots” (Xue Bin Peng et al. 2020). By incorporating sample-efficient domain adaptation techniques into the training process, their system “is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment” (Xue Bin Peng et al. 2020). For demonstration purposes, the scientists trained “a quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns” (Xue Bin Peng et al. 2020).
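At the core of this kind of motion-imitation approach is a per-timestep reward that scores how closely the simulated robot tracks the reference motion (for example, retargeted dog motion-capture data). The snippet below is a minimal sketch of that general idea only, assuming hypothetical state quantities; the function name, weights, and error scales are illustrative and are not taken from the authors’ implementation.

```python
import numpy as np

def imitation_reward(joint_angles, ref_joint_angles,
                     base_pos, ref_base_pos,
                     w_pose=0.6, w_base=0.4):
    """Illustrative tracking reward: the policy is rewarded for matching
    the reference motion's joint angles and base position at this timestep.
    All weights and scales here are placeholders, not the paper's values."""
    pose_err = np.sum((joint_angles - ref_joint_angles) ** 2)
    base_err = np.sum((base_pos - ref_base_pos) ** 2)
    # Exponentiated negative errors keep each term in (0, 1].
    r_pose = np.exp(-2.0 * pose_err)
    r_base = np.exp(-10.0 * base_err)
    return w_pose * r_pose + w_base * r_base

# Hypothetical usage inside an RL rollout: compare the current simulated
# state against the corresponding frame of the reference clip.
r = imitation_reward(np.zeros(12), np.full(12, 0.1),
                     np.zeros(3), np.array([0.02, 0.0, 0.0]))
print(f"reward = {r:.3f}")
```

A reward of this shape lets a single learning pipeline be reused across behaviors: swapping in a different reference clip (a new gait, a hop, a turn) changes the target motion without redesigning the controller by hand.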