This is Part 11 of a 12-part series based on the research paper “Human-Machine Social Systems.” Use the table of links below to navigate to the next part.
Box 1: Competition in high-frequency trading markets
Box 3: Cooperation and coordination on Wikipedia
Box 4: Cooperation and contagion on Reddit
Conclusion, Acknowledgments, References, and Competing interests
We urge a system-focused approach to AI policy and ethics: policymakers should approach AI not as a single existential threat but as a multiplicity of machines and algorithms. Machines are often more beneficial when they are superhuman or simply “alien,” and when they are diverse. Similarity in information sources, interaction speeds, optimization algorithms, and objective functions can cause catastrophic events, such as flash crashes in markets. Thus, while AI designers may chase optimization and superintelligence, policymakers should focus on the diversity of human-machine ecologies, and should demand adaptivity and resilience as well.
Policymakers should also anticipate the social co-evolution of machines and humans, which will inadvertently change existing institutions. Machines can cause humans to withdraw from interaction: for instance, outsourcing care to robots reduces caregivers’ empathy [246]. Intelligent machines are changing the transmission and creation of human culture, altering social learning dynamics, and generating new game strategies, scientific discoveries, and art forms [247]. Humans must adapt to autonomous machines just as autonomous machines must learn from and adapt to humans. Finally, ethicists should address questions such as: Should all machines be equal? Should we allow status hierarchies among machines, possibly reflecting and exacerbating existing socio-economic inequalities?
Authors:
(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;
(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;
(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;
(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.
This paper is available on arxiv under CC BY 4.0 DEED license.