Investigating cooperation with robotic peers


Authors: Debora Zanatto aff001;  Massimiliano Patacchiola aff001;  Jeremy Goslin aff002;  Angelo Cangelosi aff003
Author affiliations: School of Computing, Electronics, and Mathematics, University of Plymouth, Plymouth, United Kingdom aff001;  School of Psychology, University of Plymouth, Plymouth, United Kingdom aff002;  School of Computer Science, University of Manchester, Manchester, United Kingdom aff003
Published in: PLoS ONE 14(11)
Category: Research Article
doi: 10.1371/journal.pone.0225028

Abstract

We explored how people establish cooperation with robotic peers by giving participants the choice of whether or not to cooperate with a robot that was more or less selfish, and more or less interactive, in a more or less critical environment. We measured participants' tendency to cooperate with the robot, as well as their perceptions of anthropomorphism, trust, and credibility, through questionnaires. We found that cooperation in Human-Robot Interaction (HRI) follows the same rule as Human-Human Interaction (HHI): participants rewarded cooperation with cooperation and punished selfishness with selfishness. We also identified two robotic profiles capable of increasing cooperation, depending on the payoff: a mute, non-interactive robot was preferred when the payoff was high, whereas a more human-behaving robot was preferred when the payoff was low. Taken together, these results suggest that genuine cooperation in HRI is possible but depends on the complexity of the task.
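The reciprocity rule reported above (cooperation rewarded with cooperation, selfishness punished with selfishness) matches the classic tit-for-tat strategy from iterated cooperation games. The following is a minimal illustrative sketch only; the payoff values are hypothetical and are not those used in the study.

```python
# Illustrative sketch of reciprocal cooperation (tit-for-tat):
# an agent that mirrors its partner's previous move.
# Payoff values below are hypothetical, not taken from the experiment.

PAYOFF = {  # (my_move, partner_move) -> my reward
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, partner defects
    ("D", "C"): 5,  # I defect, partner cooperates
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def play(partner_moves):
    """Total payoff for a tit-for-tat player against a fixed partner sequence."""
    total, partner_history = 0, []
    for partner_move in partner_moves:
        my_move = tit_for_tat(partner_history)
        total += PAYOFF[(my_move, partner_move)]
        partner_history.append(partner_move)
    return total

# A consistently cooperative partner sustains mutual cooperation,
# while a selfish partner is quickly punished with defection.
print(play(["C", "C", "C", "C"]))  # 12: mutual cooperation throughout
print(play(["D", "D", "D", "D"]))  # 3: exploited once, then mutual defection
```

The sketch shows only the qualitative pattern: reciprocating agents sustain high joint payoffs with cooperators and rapidly withdraw cooperation from defectors.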

Keywords:

Behavior – Decision making – Games – Money supply and banking – Questionnaires – Robotic behavior – Robotics – Robots




The article was published in

PLOS One


2019, Issue 11