Twitter Bots Spread Misinformation, Shape Opinions, & Amplify Polarization by Exploiting Human Bias

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

Social bots on Twitter spread misinformation, target central users, and amplify polarization. Despite weak network integration, their indirect influence skews recommendation systems, exploits human biases, and shapes political discourse, harming democratic communication and social cohesion.


This is Part 5 of a 12-part series based on the research paper “Human-Machine Social Systems.” Use the table of links below to navigate to the next part.

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Box 2: Contagion on Twitter

Social bots on the micro-blogging platform Twitter (re-branded as X in 2022) are covert automated accounts designed to impersonate humans in order to boost followers, disseminate information, and promote products. Bots and bot-detection methods have co-evolved, resulting in increasingly sophisticated imitation and detection strategies [177,178,179,180,39,181,182], but detection is inherently limited by the overlap between covert autonomous bots, managed user accounts, hacked accounts, cyborgs, sock-puppets, and coordinated botnets [24,183,22,184]. Estimates suggest that 9–15% of Twitter users are bots [185,182], with bot activity typically increasing around controversial political events [186].
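
Most detection approaches score behavioral signals rather than look for a single giveaway. The sketch below is a minimal, hypothetical illustration of that idea; the features, weights, and thresholds are invented for this example and are not the method of any study cited here:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    friends: int
    account_age_days: int
    default_profile: bool

def bot_score(a: Account) -> float:
    """Combine simple behavioral signals into a score in [0, 1]."""
    score = 0.0
    if a.tweets_per_day > 50:                 # inhumanly high posting rate
        score += 0.35
    if a.friends > 10 * max(a.followers, 1):  # follows far more than is followed
        score += 0.25
    if a.account_age_days < 30:               # freshly created account
        score += 0.20
    if a.default_profile:                     # never customized the profile
        score += 0.20
    return min(score, 1.0)

suspect = Account(tweets_per_day=120, followers=15, friends=800,
                  account_age_days=12, default_profile=True)
print(f"bot score: {bot_score(suspect):.2f}")  # -> 1.00
```

Production systems use supervised classifiers over hundreds of such features, which is exactly why the arms race with increasingly human-like bots never settles.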


Twitter’s social bots, which neither follow social instincts nor succumb to fatigue, engage less in social interactions via replies and mentions than humans do, but produce more content [187]. The bots mainly retweet – a passive strategy to signal support and gain followers – yet they are less successful at attracting friends and followers than humans [182]. Overall, they are less connected, and bot–bot (2%), bot–human (19%), and human–bot (3%) interactions are considerably less common than human–human interactions (76%) [186].
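
A toy tally makes that breakdown concrete. The interaction log below is fabricated so the counts reproduce the percentages reported above; the cited study’s dataset and method are, of course, different:

```python
from collections import Counter

# (source type, target type) of each observed reply, mention, or retweet;
# counts are invented to reproduce the percentages above, not real data
interactions = ([("human", "human")] * 76 + [("bot", "human")] * 19 +
                [("human", "bot")] * 3 + [("bot", "bot")] * 2)

counts = Counter(interactions)
total = sum(counts.values())
for (src, dst), n in counts.most_common():
    print(f"{src} -> {dst}: {n / total:.0%}")
# human -> human: 76%, bot -> human: 19%, human -> bot: 3%, bot -> bot: 2%
```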


Despite their rudimentary social behavior and weak network integration, Twitter bots significantly influence political communication, public opinion, elections, and markets. They play an important role in disseminating misinformation around political events [188,189,190,191,192,193,194], COVID-19 [195,196], and stock market investment [197]. Bots can reshape human interaction networks by encouraging follows and conversations [198], and they amplify low-credibility content early in its spread by targeting influential humans through replies and mentions [192]. Their sheer numbers enhance their visibility and influence, enabling them to trigger deep information cascades [199]. Bots link equally to true and false news from low-credibility sites, but people prefer the false content, making humans ultimately responsible for the spread of false news [200].
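
Why targeting influential users pays off is easy to see in simulation. The sketch below runs a simple independent-cascade model on a synthetic hub-heavy network; the graph, seed count, and sharing probability are illustrative assumptions, not parameters from the cited studies. In typical runs, seeding a handful of high-degree “hub” accounts reaches far more of the network than seeding the same number of random accounts:

```python
import random
import networkx as nx  # assumed available; any graph library would do

random.seed(42)
G = nx.barabasi_albert_graph(n=1000, m=3)  # hub-heavy "follower" network
P_SHARE = 0.08                             # chance a neighbor reshares content

def cascade_size(seeds):
    """One independent-cascade run from the seed set; returns nodes reached."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and random.random() < P_SHARE:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

hubs = sorted(G.nodes, key=G.degree, reverse=True)[:5]  # bots mention hubs
anyone = random.sample(list(G.nodes), 5)                # bots mention at random

print("seeding 5 hubs:        ", cascade_size(hubs))
print("seeding 5 random users:", cascade_size(anyone))
```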


Twitter bots significantly contribute to negative sentiment and conflict escalation. Acting from the periphery, they target central human users to exert indirect influence. They amplify existing negative sentiment and selectively promote inflammatory content, often targeting only one of the factions [186]. Their success stems from exploiting human tendencies to connect with similar others and to engage with messages that reinforce existing beliefs [201]. Consequently, bots increase ideological polarization and degrade democratic discourse on social media, as seen in the 2016 US presidential election [202], the 2016 UK Brexit referendum [201], and the 2017 Catalan independence referendum [186].
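
One hedged way to see how one-sided automated messaging can widen opinion gaps is a bounded-confidence (Deffuant-style) opinion model. Everything below, including the parameters, the bot opinion, and the update rule, is an illustrative assumption rather than a quantity estimated in the cited studies:

```python
import random

N, EPSILON, MU, STEPS = 200, 0.3, 0.25, 20_000
BOT_OPINION = 1.0  # bots push a single extreme

def mean_extremity(bot_rate):
    """Average |opinion| after STEPS updates; higher means more extreme views."""
    random.seed(7)
    ops = [random.uniform(-1, 1) for _ in range(N)]
    for _ in range(STEPS):
        i = random.randrange(N)
        if random.random() < bot_rate:
            # bot message: pulls receptive users toward the extreme
            if abs(ops[i] - BOT_OPINION) < EPSILON:
                ops[i] += MU * (BOT_OPINION - ops[i])
        else:
            # human exchange: mutual moderation within the confidence bound
            j = random.randrange(N)
            if i != j and abs(ops[i] - ops[j]) < EPSILON:
                d = ops[j] - ops[i]
                ops[i] += MU * d
                ops[j] -= MU * d
    return sum(abs(x) for x in ops) / N

print("no bots:         ", round(mean_extremity(0.0), 2))
print("20% bot messages:", round(mean_extremity(0.2), 2))
```

In runs like this, the bot condition typically ends with more extreme average opinions, because agents near the bots’ extreme keep being pulled outward instead of moderating through human exchange.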


In sum, Twitter’s covert social bots are considered harmful, prompting the platform to cull them [203,204]. Their strength lies in indirect action: they skew the platform’s recommendation system to bias content popularity [137] and exploit human behavioral weaknesses like attention seeking, confirmation bias, moral outrage, and ideological homophily.
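
The recommendation-skew mechanism requires no sophistication at all. In a feed that ranks purely by engagement counts (a toy assumption here, not Twitter’s actual ranking algorithm), a modest number of automated retweets is enough to reorder what humans see first:

```python
# toy engagement counts; names and numbers are invented for illustration
posts = {"credible story": 40, "low-credibility story": 12}
posts["low-credibility story"] += 35  # a small botnet retweets the story

feed = sorted(posts, key=posts.get, reverse=True)
print(feed)  # ['low-credibility story', 'credible story']
```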


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arxiv under CC BY 4.0 DEED license.