
Trust and Acceptance of Social Robots: References


Too Long; Didn't Read

This article is a collaboration between the Annenberg School for Communication and the Marshall School of Business at the University of Southern California, Los Angeles. It compiles the references for the authors' study of trust and acceptance of social robots.

Authors:

(1) Katrin Fischer, Annenberg School for Communication at the University of Southern California, Los Angeles (Email: [email protected]);

(2) Donggyu Kim, Annenberg School for Communication at the University of Southern California, Los Angeles (Email: [email protected]);

(3) Joo-Wha Hong, Marshall School of Business at the University of Southern California, Los Angeles (Email: [email protected]).

Abstract

Introduction & Related Work

Method

Analysis & Results

Discussion & Conclusion

References

REFERENCES

[1] M. Deutsch, The resolution of conflict: Constructive and destructive processes. Yale University Press, 1973.


[2] J. Riegelsberger, M. A. Sasse, and J. D. McCarthy, “The researcher’s dilemma: Evaluating trust in computer-mediated communication,” International Journal of Human-Computer Studies, vol. 58, no. 6, pp. 759–781, 2003.


[3] J. K. Rempel, J. G. Holmes, and M. P. Zanna, “Trust in close relationships,” Journal of Personality and Social Psychology, vol. 49, no. 1, pp. 95–112, 1985.


[4] J. D. Lee and K. A. See, “Trust in automation: Designing for appropriate reliance,” Human Factors, vol. 46, no. 1, pp. 50–80, 2004.


[5] D. Ullman and B. F. Malle, “Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust,” in Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’19. IEEE Press, 2019, pp. 618–619.


[6] R. C. Mayer, J. H. Davis, and F. D. Schoorman, “An integrative model of organizational trust,” Academy of Management Review, vol. 20, no. 3, pp. 709–734, 1995.


[7] W. Kim, N. Kim, J. B. Lyons, and C. S. Nam, “Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach,” Applied Ergonomics, vol. 85, May 2020.


[8] C. M. Carpinella, A. B. Wyman, M. A. Perez, and S. J. Stroessner, “The robotic social attributes scale (RoSAS): Development and validation,” in ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: IEEE Computer Society, 2017, pp. 254–262.


[9] S. T. Fiske, A. J. Cuddy, and P. Glick, “Universal dimensions of social cognition: Warmth and competence,” Trends in Cognitive Sciences, vol. 11, no. 2, pp. 77–83, 2007.


[10] A. J. Cuddy, S. T. Fiske, and P. Glick, “The BIAS map: Behaviors from intergroup affect and stereotypes,” Journal of Personality and Social Psychology, vol. 92, no. 4, pp. 631–648, 2007.


[11] B. Reeves, J. Hancock, and X. Liu, “Social robots are like real people: First impressions, attributes, and stereotyping of social robots,” Technology, Mind, and Behavior, vol. 1, no. 1, 2020.


[12] C. Nam and J. Lyons, Eds., Trust in human-robot interaction: Research and applications. Elsevier, 2020.


[13] P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. Chen, E. J. De Visser, and R. Parasuraman, “A meta-analysis of factors affecting trust in human-robot interaction,” Human Factors, vol. 53, no. 5, pp. 517–527, 2011.


[14] S. Naneva, M. Sarda Gou, T. L. Webb, and T. J. Prescott, “A systematic review of attitudes, anxiety, acceptance, and trust towards social robots,” International Journal of Social Robotics, 2020.


[15] M. Salem and K. Dautenhahn, “Evaluating trust and safety in HRI: Practical issues and ethical challenges,” Emerging Policy and Ethics of Human-Robot Interaction, 2015.


[16] M. Salem, G. Lakatos, F. Amirabdollahian, and K. Dautenhahn, “Would you trust a (faulty) robot?: Effects of error, task type and personality on human-robot cooperation and trust,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2015, pp. 141–148.


[17] F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Quarterly: Management Information Systems, vol. 13, no. 3, pp. 319–339, 1989.


[18] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, “User acceptance of information technology: Toward a unified view,” MIS Quarterly: Management Information Systems, vol. 27, no. 3, pp. 425–478, 2003.


[19] M. J. Matarić and B. Scassellati, “Socially assistive robotics,” in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Springer, 2016, pp. 1973–1994.


[20] M. S. Fritz and D. P. MacKinnon, “Required sample size to detect the mediated effect,” Psychological Science, vol. 18, no. 3, pp. 233–239, 2007.


[21] A. J. Fairchild, D. P. MacKinnon, M. P. Taborga, and A. B. Taylor, “R² effect-size measures for mediation analysis,” Behavior Research Methods, vol. 41, no. 2, pp. 486–498, 2009.


[22] D. L. Streiner, “Finding our way: An introduction to path analysis,” Canadian Journal of Psychiatry, vol. 50, no. 2, pp. 115–122, 2005.


[23] A. F. Hayes, Introduction to mediation, moderation, and conditional process analysis: A regression-based approach, 3rd ed. The Guilford Press, 2022.


This paper is available on arXiv under a CC 4.0 license.