
Bot Strategies: Manipulating Democratic Discourse

by humanID.org, November 25th, 2020



Two-thirds of Americans know what bots are, and largely from a negative perspective, not without good reason. Malicious bot activity has evolved over time, from hackers using bots to infect millions of devices with malware to the more modern use of bots to spread propaganda and fake news and to operate false social media accounts.

But bots are not inherently forces of manipulation and disinformation. They are simply automated programs that run tasks repetitively. In the early age of the Internet, bots took on good-intentioned service roles. Some of the earliest bots were merely web crawlers for search engines, organizing web pages for users.

In fact, bots still do some of the work necessary for our most-loved tools to function. Google relies on bots to pull up relevant material for our searches. Other sites, like Facebook, use chatbots to answer user questions. Bots take on an enormous amount of labor on today’s Internet: “good” bots account for 13% of Internet traffic and “bad” bots for 24%.

The current bad reputation of bots has much to do with their role in swaying public perception and creating false illusions of reality on social media. The presence of coordinated information campaigns has bolstered this image, and considering the disastrous effects of bots during the recent Brazilian election, among others, the increasingly sophisticated strategies of social media bots merit a closer look.

On social media platforms, one primary use of bots is amplifying certain content by sharing, liking, tagging, or taking over hashtags. Political campaigns push out extremist content, conspiracy theories, and pro- or anti-candidate messages. One Syrian botnet successfully crowded out tweets about the Syrian Civil War by overwhelming them with other content. The main goals of this behavior are to obscure certain topics, amplify the bot’s message, and falsify metrics.

As is more commonly known, bots are major purveyors of fake news and propaganda. Bots play a large role in spreading low-credibility articles before they are picked up by humans and go viral. This has sparked public discussion about the purposeful sabotage of democratic processes. Notably, WhatsApp received widespread condemnation for its role in influencing Brazil’s 2018 election. Rampant bots on the messaging app were a major source of fake news in President Bolsonaro’s favor. 

Interestingly, bots can also contribute to the credibility of public figures. Many politicians have been criticized for inflating their Twitter follower counts with bot accounts. By artificially boosting content, bots also affect its perceived credibility: when a hashtag or issue goes viral or is trending, there is an implied sense of truth and reality underlying it.

All of these strategies work to create false impressions or seed selectively chosen content in the public sphere. By sparking discussion over topics that would normally go unnoticed and drowning out social movements that work against a given political agenda, bots gradually mold people’s perceptions of what others care about and which issues matter most. This manipulation is subtle and difficult for an ordinary user to track, especially when bots are so pervasive and numerous. Rather than one entity openly pushing an agenda, bots blend in alongside everyday users, appearing inconspicuous.

Bot strategies are continuously being cultivated and refined. We can’t know what bot activity will look like in the near future, but we do know that distinguishing a bot account from a real account can elude even a media-savvy user’s grasp. With this in mind, and a growing underground digital economy that rents out botnets, a whole host of implications emerges for the future of politics. Already, we are seeing the consequences of bots in politics, spreading smoke screens and contributing to an artificial digital landscape. 
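Part of why bot accounts elude even media-savvy users is that detection rests on behavioral heuristics rather than any definitive signal. As a purely illustrative sketch (not any platform's actual method, and with made-up feature names and thresholds), a detector might combine weak signals like account age, posting rate, and content repetition into a crude score:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical features; real detectors use many more signals.
    age_days: int                    # how long the account has existed
    posts_per_day: float             # average posting rate
    duplicate_ratio: float           # fraction of near-duplicate posts (0..1)
    follower_following_ratio: float  # followers divided by following

def bot_score(acct: Account) -> float:
    """Return a crude 0..1 'bot-likeness' score.

    The thresholds below are illustrative assumptions, not values
    used by any real platform.
    """
    score = 0.0
    if acct.age_days < 30:
        score += 0.25   # very new accounts are more suspect
    if acct.posts_per_day > 50:
        score += 0.25   # inhumanly high posting rate
    if acct.duplicate_ratio > 0.5:
        score += 0.30   # mostly repeated/amplified content
    if acct.follower_following_ratio < 0.1:
        score += 0.20   # follows many, followed by few
    return min(score, 1.0)

# A fresh, hyperactive, repetitive account scores as highly bot-like.
suspect = Account(age_days=5, posts_per_day=200,
                  duplicate_ratio=0.9, follower_following_ratio=0.02)
print(bot_score(suspect))  # 1.0
```

The weakness of this kind of scoring is exactly what the paragraph above describes: sophisticated botnets can tune each signal to stay under every threshold, which is why heuristics alone keep falling behind.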

If the only barrier to hiring a botnet is price, political figures and entities can weaponize them for their political agenda, eroding Internet freedom. Although individual users can arm themselves with the tools to identify a bot, this does little to prevent the general landscape of a social media platform from being altered. Completely banning bots is almost impossible for most platforms, especially if they still want to allow non-nefarious bots. In any case, platforms have no incentive to eliminate bots when their business model partly relies on high engagement levels—which bots contribute to. 

It is true that social media platforms, from WhatsApp to Twitter, have taken steps to combat bots, but these efforts fall far short of success.

Instead, we need to fall back on public policy to regulate platforms and protect public discourse from distortion and outside interference. Although it is hard to measure the exact scale of bots’ impact on people’s perceptions, legislation requiring Big Tech to weed out bot accounts and identify bot networks is not out of reach.

Lead image via Arseny Togulev on Unsplash.