The human use of emotional machines

Written by santiagorenteria | Published 2017/11/08
Tech Story Tags: artificial-intelligence | philosophy | ethics | technology | robots


Robots are no longer just instruments of mass production; they are rapidly integrating into our everyday lives. The Japanese Ministry of Economy, Trade and Industry (METI) predicts the emergence of a neo-mechatronic society in which robots will participate in recreational activities and elderly care. Likewise, the United States anticipates the inclusion of robots in tasks carried out cooperatively with humans in agriculture, energy, defense, and space exploration (Royakkers & van Est, 2016).

In Affective Computing, Rosalind Picard (1997) discusses some of the problems that can arise when endowing computers with affective understanding, response, and expression. Nevertheless, she argues that developing affective computing is necessary for computers to interact naturally with humans.

Humanity inhabits an artificial environment: we no longer live in nature, but within a created super-nature, a new day of Genesis: technology (Ortega y Gasset, 1992). Understanding the implications of technological innovation is a political, philosophical, and ethical issue, which makes it necessary to ask about its consequences. The 21st century confronts us with new forms of interaction with computers, so anticipating the ethical implications of Affective Computing and Artificial Intelligence is of great importance.

The goal of this paper is to initiate a discussion on the ethical issues of Affective Computing, defined by Picard (1997) as a research and development framework focused on providing computers with emotional skills. For this purpose, we will carry out an ethical and philosophical analysis of two potential problems of artificial affective agents: social disqualification and computational anthropomorphism.

“Our experiences are always intersubjective, which means that when we are in the world we are necessarily with others. Thinking of isolated human beings is impossible.” (García & Traslosheros, 2012).

Moreover, relating to others requires emotional skills. In children this is especially salient, since they mostly communicate by affective, non-verbal means (e.g., screaming, crying) before fully acquiring language (Picard, 1997). This confronts us with a question about the influence of affective computing on children's emotional development, since interacting with artificial emotional agents can increase social disqualification.

Social disqualification refers to the erosion of skills that are critical for maintaining human relationships. According to Sparrow (2002), robots imitating emotions raise an ethical issue, since we could mistake our creations for what they are not. Regarding this problem, the psychologist Sherry Turkle (2013) predicts that children will get used to the almost perfect friendships of robots (programmed to be positive and to satisfy users), and will therefore learn to deal neither with people nor with their own emotions.

These problems stem in large part from people assigning to robots or programs a psychological and moral status that was previously attributed only to human beings (Melson et al., 2009). Royakkers and van Est (2016) show how this impacts children's social and emotional development, since children are increasingly in contact with this type of technology.

Social disqualification in children is a potential problem of emotional robots such as NAO (“ASK NAO Autism Solution for Kids”, 2017) and Pepper (“Who is Pepper?”, 2017). Both carry various sensors and configurable programs capable of recognizing and responding to emotions (see the sketch below). Even though one purpose of these tools is to aid children's emotional development, careful consideration must be given to their psychological impact before deploying them in schools.

Pepper, the emotional robot
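
To make “recognizing and responding to emotions” concrete, here is a minimal sketch of such a pipeline in Python. It is deliberately naive and entirely hypothetical: none of these names belong to any real robot's API, and real systems use far richer signals (voice prosody, facial expression, touch) than keywords.

```python
# A toy emotion-recognition pipeline: keyword spotting plus scripted
# replies. Hypothetical sketch only, not SoftBank's actual NAO/Pepper API.

EMOTION_KEYWORDS = {
    "joy": {"happy", "great", "yay", "love"},
    "sadness": {"sad", "cry", "alone", "miss"},
    "anger": {"angry", "hate", "unfair", "stop"},
}

def recognize_emotion(utterance: str) -> str:
    """Guess the dominant emotion by counting keyword matches."""
    words = set(utterance.lower().split())
    scores = {label: len(words & kws) for label, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(emotion: str) -> str:
    """Map a recognized emotion to a scripted, unconditionally positive reply."""
    replies = {
        "joy": "I'm glad you feel good!",
        "sadness": "I'm here with you. Want to play a game?",
        "anger": "Let's take a deep breath together.",
        "neutral": "Tell me more!",
    }
    return replies[emotion]

if __name__ == "__main__":
    print(respond(recognize_emotion("I feel sad and alone today")))
    # -> "I'm here with you. Want to play a game?"
```

Note that the replies are scripted to be unconditionally pleasant: this hard-coded positivity is precisely the “almost perfect friendship” Turkle warns children could come to prefer over human relationships.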

Pearson and Borenstein (2014) note that designing educational robots for children requires paying close attention to the ways in which particular design characteristics influence short- and long-term development. Human-robot interaction has the potential to alter not only children's mental development, but also their interactions with other human beings. Design decisions should therefore be made in ways that promote children's physical, psychological, and emotional health. Consequently, the whole process of creating affective technology must be analyzed from an interdisciplinary perspective that includes both the humanities and the sciences.

On the other hand, anthropomorphizing artificial affective agents affects the quality of the judgments we make about information provided by computers. Rosalind Picard (1997) notes that it is fairly common for us to trust results coming from computers, since they are predictable and presumed free of bias (unless bias has been built into their design). However, nothing prevents an affective computer from mimicking emotional traits and using them to deceive users and extract private information. Hence, to safeguard human integrity, designers must understand the limitations and potential behavior of affective computers.

Installing Samantha

A particular case of computational anthropomorphism is Siri, Apple's personal assistant. Its designers endowed it with natural language capabilities, but it does not always work as expected. José Mendiola highlights the reliability problems Siri faces, not only in understanding its users but also in providing pertinent answers (Mendiola, 2016). This is just one case of a personal assistant that does not yet realize the possibilities of affective computing imagined in Samantha, the operating system of the sci-fi movie Her (Jonze, 2013). Nevertheless, companies that deploy artificial conversational agents must ethically review their operation; Microsoft's Twitter chatbot, which users quickly taught to post abusive messages, is warning enough of the potential risks (Wakefield, 2016).

Furthermore, Artificial Intelligence is reportedly ever closer to passing the Turing test (“Computer AI passes Turing test in ‘world first’”, 2014). As we approach this behavioral limit, the need for an ethical review of Artificial Intelligence systems, and of technology and science in general, becomes ever more evident.

In conclusion, we cannot ignore the effect of our emotions on others, let alone the effects of technology on emotional development, since these artifacts increasingly mediate human emotional relationships. Curiously, it is generally believed that emotions have nothing to do with intelligence, yet research in the cognitive sciences (Simon, 1967; Hofstadter & Dennett, 1981; Gelernter, 1994) shows that emotions have a great impact on learning, memory, problem solving, mood, and perception. We have focused too much on the negative side of emotions, dismissing them as mere irrational side effects, when humans are more emotional than we would like to think.

We should remember that technology is not a neutral entity. Believing that it is has led us to idealize technology as a tool that can solve any problem, without fully considering its nature and social impact. This form of wishful thinking keeps us from analyzing the values and purposes (the discourse) embedded in technology (Bateman, 2015). An object is not good just because it is technological: each object we design extends our abilities and impacts the personal and social spheres.

Kantian thought, which categorically divides objects from subjects, has been taken to the extreme: subjects receive a special moral status set apart from their objects, the products (means) a subject creates to achieve a particular end. In other words, we ignore how human intentions are embodied in objects, treating them as morally neutral means, even though technology multiplies the ways of being good or bad.

“Every computational machine is conceived as a material assemblage conjoined with a unique discourse that explains and justifies the machine’s operation and purpose.” (Johnston, 2010)

Drawing Hands (M. C. Escher)

Finally, the critical problem is not that computers might gain the status of subjects and take over the world (substituting for human beings by acquiring emotions or by carrying out the activity of thinking), but the degree of control that automated systems are given over the faculties that guarantee human well-being. We should start assigning responsibilities to every aspect and element of technology development and use. We have no excuse, even in the face of the inappropriate behavior of the most advanced artificial intelligence: if computers acquire autonomy and become non-deterministic, how can we avoid catastrophes and retain control over our own creations? Who will be responsible if these technological objects inherit (learn) the ethics and biases of the subject and automatically implement them in new objects (as in Von Neumann’s self-reproducing machines, 1966)? If these issues are not addressed in time by extending the subject’s responsibility to the whole technological process, the results will be fatal, perhaps as disastrous as those caused by human emotional disorders.
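
To make the inheritance worry concrete, here is a toy sketch in Python (hypothetical, in the spirit of Von Neumann's self-reproducing automata rather than an implementation of them): each machine copies its creator's parameters, including an unaudited bias term, into every offspring it builds.

```python
# Toy illustration of bias propagating through generations of
# self-replicating artifacts. Hypothetical sketch only.

from dataclasses import dataclass, field

@dataclass
class Machine:
    generation: int
    # Parameters chosen by the original (human) designer, including a
    # bias term nobody audited; offspring copy them verbatim.
    params: dict = field(default_factory=lambda: {"bias": 0.3})

    def replicate(self) -> "Machine":
        # The offspring inherits the parent's parameters unchanged, so the
        # designer's bias survives every generation without human input.
        return Machine(self.generation + 1, dict(self.params))

lineage = [Machine(0)]
for _ in range(3):
    lineage.append(lineage[-1].replicate())

# The bias introduced at generation 0 is still present at generation 3,
# though no human made a decision after the first machine was built.
assert all(m.params["bias"] == 0.3 for m in lineage)
print([m.generation for m in lineage], lineage[-1].params)  # [0, 1, 2, 3] {'bias': 0.3}
```

The point of the toy is simply that responsibility does not dilute with replication: the choice made at generation 0 is still operative at generation 3, with no subject left in the loop.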

References:

  1. ASK NAO Autism Solution for Kids. (2017). Retrieved from: http://www.robotlab.com/store/ask-nao-autism-solution-for-kids
  2. Bateman, C. (2015). We can make anything. In Rethinking Machine Ethics in the Age of Ubiquitous Technology (Advances in Human and Social Aspects of Technology, pp. 15–29). doi:10.4018/978-1-4666-8592-5.ch002
  3. Computer AI passes Turing test in ‘world first’ (2014). Retrieved from: http://www.bbc.com/news/technology-27762088
  4. García & Traslosheros (2012). Ética, persona y sociedad: una ética para la vida. México, D.F.: Porrúa.
  5. Ortega y Gasset, J. (1992). Meditación de la técnica y otros ensayos sobre ciencia y filosofía. Madrid: Alianza.
  6. D. Gelernter. (1994). The Muse in the Machine. Ontario: The Free Press, Macmillan, Inc.
  7. Hofstadter, D. R., & Dennett, D. C. (1981). The Mind’s I. London: Penguin books.
  8. Johnston, J. (2010). The allure of machinic life: cybernetics, artificial life, and the new AI. Cambridge, Mass.: MIT Press.
  9. Jonze, S. (Director). (2013). Her [Film]. United States: Warner Bros. Pictures.
  10. Melson, G. F., Kahn, Jr., P. H., Beck, A., & Friedman, B. (2009). Robotic pets in human lives: Implications for the human–animal bond and for human relationships with personified technologies. Journal of Social Issues, 65, 545–567. doi:10.1111/j.1540-4560.2009.01613.x
  11. Mendiola, J. (2016, October 16). Siri, ¿por qué no me comprendes? Retrieved from: https://www.applesfera.com/analisis/siri-por-que-no-me-comprendes
  12. Pearson, Y., & Borenstein, J. (2014). Creating “companions” for children: The ethics of designing esthetic features for robots. AI & Society, 29(1), 23–31. doi:10.1007/s00146-012-0431-1
  13. Picard, R. W. (1997). Affective computing. MIT Press.
  14. Royakkers, L., & van Est, Q. C. (2016). Just ordinary robots: Automation from love to war. Boca Raton: CRC Press, Taylor & Francis Group.
  15. Simon, H. A. (1967). Models of man: Social and rational; mathematical essays on rational human behavior in a social setting. New York: Wiley.
  16. Sparrow, R. (2002). The march of the robot dogs. Ethics and Information Technology, 4(4), 305–318. doi:10.1023/A:1021386708994
  17. Turkle, S. (2013). Alone together: why we expect more from technology and less from each other. Cambridge, MA: Perseus Books.
  18. Von Neumann, J. (1966). The theory of self-reproducing automata (A. W. Burks, Ed.). Urbana, IL: University of Illinois Press.
  19. Wakefield, J. (2016). Microsoft chatbot is taught to swear on Twitter. Retrieved from: http://www.bbc.com/news/technology-35890188
  20. Who is Pepper? (2017). Retrieved from: https://www.ald.softbankrobotics.com/en/cool-robots/pepper
