Human-Machine Interactions Reveal Behavioral Patterns, Biases, and Autonomous Roles

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

This section explores human-machine interactions, focusing on behavioral patterns, cognitive biases, and the evolving roles of autonomous machines. It highlights how humans treat machines like peers, yet judge them harshly, trust their advice, and are influenced by their design and behavior in shared social systems.


This is Part 2 of a 12-part series based on the research paper “Human-Machine Social Systems.” Use the table of links below to navigate to the next part.

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Human-Machine Interactions

We use the term “machines” to refer to computational artifacts that occupy a multidimensional space defined by embodiment (from physical devices such as humanoid robots to bots and algorithms that exist only in digital space), algorithmic sophistication (from simple expert systems that use pre-defined if-else rules to generative deep-learning models that learn from data in real time), autonomy (from fully managed to fully autonomous), sociality (from background infrastructure to human peers), multiplicity (from single large models like GPT4 to many simple models like those used in ensemble learning), heterogeneity (from the coordinated behaviors of bot farms to the diverse strategies of competing trading algorithms), and generality (from highly specialized chess bots to open-ended problem solvers like LLMs). Here, we restrict our attention to multiple, autonomous, and social artificial agents that operate and interact in the same social environment as humans.
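One way to make this taxonomy concrete is to treat each dimension as an axis on which a given system can be placed. The Python sketch below is purely illustrative: the dimension names follow the paragraph above, while the 0–1 scale, the in_scope thresholds, and the example profiles are simplifying assumptions rather than anything specified in the paper.

```python
from dataclasses import dataclass


@dataclass
class MachineProfile:
    """Illustrative placement of a system along the dimensions described above.

    Each score is an assumed 0 (low) to 1 (high) rating, not a measurement.
    """
    name: str
    embodiment: float      # 0 = purely digital, 1 = physical humanoid robot
    sophistication: float  # 0 = if-else expert system, 1 = generative deep learning
    autonomy: float        # 0 = fully managed, 1 = fully autonomous
    sociality: float       # 0 = background infrastructure, 1 = human peer
    multiplicity: float    # 0 = single large model, 1 = many simple models
    heterogeneity: float   # 0 = coordinated and uniform, 1 = diverse and competing
    generality: float      # 0 = specialized (chess bot), 1 = open-ended (LLM)

    def in_scope(self) -> bool:
        # The review focuses on multiple, autonomous, social agents that share
        # a social environment with humans; the 0.5 cut-offs are assumptions.
        return self.autonomy > 0.5 and self.sociality > 0.5 and self.multiplicity > 0.5


# Hypothetical example profiles, scored by hand for illustration only
chess_engine = MachineProfile("chess engine", 0.0, 0.6, 0.9, 0.2, 0.1, 0.3, 0.0)
trading_bots = MachineProfile("competing trading algorithms", 0.0, 0.7, 0.9, 0.7, 0.9, 0.9, 0.2)

print(chess_engine.in_scope())  # False: neither social nor multiple
print(trading_bots.in_scope())  # True: the kind of system this review covers
```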


Although potentially relevant, in this work we do not discuss human-assisted bots and bot-assisted humans known as “cyborgs” [22,23], coordinated bot accounts known as botnets, swarmbots, and bot farms [24,25,26,27], algorithms that perform background infrastructure-related tasks such as recommender systems [28,29], crawlers, indexers, and scrapers [30,31], smart electricity grids [32], and traffic light control systems [33], as well as hybrid traffic flows [34,35]. These systems have been extensively discussed elsewhere. We acknowledge that we may be excluding other interesting cases, including systems that are currently out of scope but that future technological advances may bring within our taxonomy of “autonomous” and “social” machines.


Like humans, the machines we consider exhibit diverse goal-oriented behavior shaped by information and subject to constraints. However, the actual cognition and behavior of machines differ from those of humans. Machines’ behavior tends to be predictable and persistent [36], with higher precision and faster execution [37], better informed thanks to access to global information [38], and less adaptable and susceptible to influence [39,40,41]. In contrast, humans tend to be limited to local information, satisfice, act with errors, learn and adapt, and succumb to social influence and peer pressure, yet they also exhibit opinion stubbornness and behavioral inertia; on occasion, they may also use metacognition and revise their own perceptual and decision-making models.
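To make these stylized differences tangible, the following toy sketch (a minimal Python illustration, not a method from the paper) contrasts a deterministic, globally informed “machine” agent with a noisy, locally informed, satisficing “human” agent in a repeated estimation task. All parameter values, update rules, and thresholds are assumptions chosen only to illustrate the contrast described above.

```python
import random

TRUE_VALUE = 100.0  # the quantity both agents try to estimate


def machine_estimate(global_signal: float) -> float:
    # Predictable, precise, globally informed: the same deterministic answer every round.
    return global_signal


def human_estimate(prev: float, local_sample: float, peer_mean: float) -> float:
    # Locally informed and error-prone: observes only a noisy, possibly biased sample.
    noisy_obs = local_sample + random.gauss(0, 5)
    if abs(prev - noisy_obs) < 3:
        proposal = prev                          # satisficing / behavioral inertia
    else:
        proposal = 0.7 * prev + 0.3 * noisy_obs  # partial learning and adaptation
    return 0.8 * proposal + 0.2 * peer_mean      # social influence toward peers


human, peers = 90.0, 95.0
for round_no in range(5):
    machine = machine_estimate(TRUE_VALUE)                   # sees the global signal
    human = human_estimate(human, 0.9 * TRUE_VALUE, peers)   # sees a biased local sample
    print(f"round {round_no}: machine={machine:.1f} human={human:.1f}")
```

Across rounds, the machine’s output never moves, while the human estimate wanders with noise, inertia, and the pull of peers, which is essentially the asymmetry the literature above describes.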


Humans often exhibit cognitive biases due to limited information processing capacity, bounded rationality, reliance on heuristics, vestiges of evolutionary adaptation, and emotional motivations [42,43], and algorithms trained on data generated by humans may reproduce these biases [44,45]. Research on human-like general AI aims to erase the cognitive and behavioral differences between humans and machines, while work on human-competitive AI strives for superintelligence that is faster, smarter, and more precise than humans’ [46,47]. Either way, humans will remain distinct from machines in the near future.


Research from the CASA (computers as social actors) paradigm in psychology emphasizes that humans treat and respond to machines similarly to other humans: people reciprocate kind acts by computers [48], treat them as politely as they treat humans [49], and consider them competent, but also apply gender and racial stereotypes to them [50,51]; people also humanize and empathize with machines, experiencing distress when witnessing the mistreatment of a robot [52,53].


Nevertheless, there are visible neurophysiological differences in the brain when humans interact with robots [54,55], likely because humans do not attribute agency and morals to them [56,57]. AI is perceived to have lower intentional capacity, to lack self-interest, and to be more unbiased than humans. Consequently, humans exhibit a narrower emotional spectrum with machines than with other humans, reacting with lower and flatter levels of social emotions such as gratitude, anger, pride, and a sense of fairness [58,59,60,61], yet judging machines more harshly when they make mistakes, cause harm, or incur losses [62,63]. Further, humans behave more rationally and selfishly with machines, cooperating and sharing less while demanding and exploiting more [64,65,66,67]. People would design a machine to be more cooperative than they are themselves [68] but act pro-socially towards it only if it is more human-like [69] or if it benefits another human [70]. Compared to a single person, small groups of people are even more likely to exhibit competitive behavior and bullying toward robots [71].


Despite this intergroup bias, humans are still susceptible to machine influence when making decisions or solving problems [72]. Robots can cause both informational and normative conformity in people [73,74], and AI and ChatGPT can corrupt humans’ moral judgment and lead them to follow unethical advice [75,76]. Humans tend to trust algorithmic advice more than advice coming from another human or a human crowd [77,78] but may also avoid it if they perceive a threat to their decision control or a lack of understanding and cognitive compatibility [79,80].


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arXiv under a CC BY 4.0 DEED license.