How do we approach robots: anthropomorphism, the intentional stance, cultural norms and values, and societal implications
Authors: Marchesi, S. & Wykowska, A.
To be published in DeGruyter Handbook of Robots in Society and Culture
Date: June 2023
Sharing our social environments with social robots is an experience that is becoming increasingly common (Dautenhahn, 2007; Prescott & Robillard, 2021). It is therefore pivotal to understand how we integrate such artificial agents into our societies, and what the ethical consequences of this integration are. Understanding how we relate to artificial agents should be based on an examination not only of the individual human (Wykowska, 2021), but also of cultural contexts and the societal expectations we absorb as we grow up in our societies.
Artificial agents can be considered technologically opaque (Surden & Williams, 2016), meaning that their technological complexity (i.e., the complexity of their hardware and software) leads them to be perceived as in-between entities: they are clearly man-made artifacts, yet they can potentially be perceived as social actors. Some authors argue that this potential double nature of social robots may even constitute a new ontological category (NOC) (Kahn & Shen, 2017); investigating people's relationships with them is therefore of high interest to social and cognitive scientists. The purpose of the present chapter is to explore the different factors that contribute to the perception of artificial agents, such as humanoid robots, as potential social agents, and how these factors can inform the creation and design of social robots that will be accepted as part of our social environments. We focus on three major factors: anthropomorphism, the intentional stance, and cultural contexts. Finally, we discuss the importance of including such factors in the design of social robots and how these design choices can affect the ethical and moral implications for our society.