Kevin Brown’s Twitter profile (@kevbrown618) describes him as a 26-year-old progressive from Pittsburgh, Pennsylvania. He’s an active member of #TheResistance and readily contributes to liberal social media campaigns such as #FireHanity (an effort to convince Fox News host Sean Hannity’s sponsors to turn their backs on his program, effectively stripping it of funding). His insights earned him retweets and likes from CNN contributor Jason Kander and hundreds of other people; his content has been seen over 131,000 times in just over six weeks.
Kevin is a pretty popular guy. He is also a Twitter bot.
There is a mythology around what Twitter bots are capable of because of the role they played in the 2016 Presidential election. I launched “Kevin Brown” in late July/early August as one of my weekend projects and my personal attempt to investigate the real impact Twitter bots could have on our public discourse.
The bots used during the 2016 election worked by pumping out high volumes of pre-programmed content or by sharing content produced by others that agreed with a certain assigned agenda. Content that was shared enough by bots would eventually be seen and shared by a real person, distributing the information to their personal network. People in that network would then see information coming from a source they trusted and possibly share the content themselves. Then the process repeats.
I wanted Kevin to be fundamentally different. Rather than spamming until a few people decided his content was worth sharing, Kevin would build a reputation as a trusted source that could be leveraged when distributing content. Building a bot capable of earning the trust of other users would require a degree of autonomy and artificial intelligence not typically seen in Twitter bots. This would clearly be more work up front, but the payoff would be less effort when attempting to get content shared by others.
To make Kevin convincing, he needed to be well disguised. I used a random name generator to come up with Kevin Brown, and then a random date generator to give him a birthday of June 18, 1991. From there, I searched Flickr for a picture of a guy in his 20s that allowed commercial use. The Twitter handle “kevinbrown” was already taken, so “kevbrown618” would do. By placing him in western Pennsylvania, where President Trump’s rust-belt base lives, Kevin could play the role of embattled liberal looking for like-minded friends on the internet. I reinforced the geographic connection with a cover photo of Pittsburgh’s PNC Park and wrote a brief bio to complete the window dressing:
Proud #Progressive and #Democrat living in the middle of “Trump Country” #NotMyPresident!
One of the giveaways that you’re looking at a Twitter bot is that the account will only distribute content on one topic, and it will do so frequently.
To be more realistic, Kevin needed some “personality”, or interests outside of criticizing Donald Trump (even if that was a favorite pastime). To accomplish this, I planned to write Kevin so he would occasionally share a funny cat video or make some cultural references.
This wouldn’t produce results that were anywhere near as realistic as a human’s account, but it would hopefully help throw people off since it isn’t behavior you would typically expect from a conventional Twitter bot.
The next step was to bring Kevin to life by writing the code that would drive him.
I made Kevin a liberal bot because I’m a liberal person; it was a persona that I could easily gut-check during development. His code, however, is extremely versatile. One word in his code is all that keeps Kevin from becoming one of President Trump’s biggest fans. Similarly, he could just as easily be a beer snob or a devoted football fan.
Regardless of his slant, Kevin’s reputation hinged on two factors: his content and behavior.
To build a reputation, Kevin needed to become familiar to his network. Accomplishing this required him to engage with others’ content as well as produce original content for others to engage with. Finding content to share was easy; Kevin simply retweets or favorites popular material from his network. In this way Kevin is just like his nefarious Russian counterparts.
Original content production is more complicated. To simplify things, Kevin can create only two kinds of content: basic tweets and commentary on popular links floating through his network.
Each time Kevin checks Twitter, he analyzes hundreds of tweets and builds a statistical model of what words are typically used together. For example: “President” is often followed by either “Donald” or “Trump”. Kevin then picks a random starting word and uses his model to determine the next word, repeating until he’s created a full tweet. This process, known as a Markov chain, is used to create all of Kevin’s original content.
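The mechanics of that process are simple enough to sketch in a few lines of Python. This is a minimal illustration of the Markov-chain technique, not Kevin’s actual code; the sample tweets and the word limit are made up for the example.

```python
import random
from collections import defaultdict

def build_model(tweets):
    """Map each word to the list of words observed directly after it."""
    model = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_words=20, rng=random):
    """Walk the chain from a starting word until it dead-ends or hits the limit."""
    words = [start]
    while len(words) < max_words and words[-1] in model:
        words.append(rng.choice(model[words[-1]]))
    return " ".join(words)
```

Because duplicates are kept in each list, common transitions are chosen proportionally more often, which is exactly the “statistical model” behavior described above: feeding in tweets about “President Donald Trump” makes “President” likely to be followed by “Donald” or “Trump”.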
The approach works really well because it gives Kevin the ability to correctly and easily use slang, hashtags, and emoticons — all of which give him a much more human feel. As Twitter collectively responds to events, Kevin is able to participate by inserting his own commentary into his network. His followers see this content and often share it.
There was never any hope for Kevin if he behaved like a robot. Too much tweeting, tweeting on a rigid schedule, or tweeting at odd hours are all huge hints that a tireless computer is at the wheel rather than a human. To address this, I built Kevin a more natural schedule. He only checks Twitter at random intervals throughout the day and then randomly decides what (if any) action to take.
For added effect, Kevin also “sleeps” and “works”. His account is inactive each night while he sleeps, and activity slows during weekdays when he should be at work.
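A minimal sketch of what that schedule logic could look like. The specific hours, probabilities, and check-in intervals here are assumptions for illustration, not Kevin’s real settings.

```python
import random
from datetime import datetime, timedelta

# Assumed windows: "asleep" from 23:00 to 07:00, "at work" 09:00-17:00 on weekdays.
SLEEP_START, SLEEP_END = 23, 7
WORK_START, WORK_END = 9, 17

def is_awake(now):
    """Kevin never acts during his sleep window."""
    if SLEEP_START <= now.hour or now.hour < SLEEP_END:
        return False
    return True

def act_probability(now):
    """Chance that Kevin takes any action at a given check-in."""
    if not is_awake(now):
        return 0.0
    if now.weekday() < 5 and WORK_START <= now.hour < WORK_END:
        return 0.2  # slower while he "should be at work"
    return 0.6

def next_check(now, rng=random):
    """Schedule the next Twitter check after a random 20-90 minute gap."""
    return now + timedelta(minutes=rng.randint(20, 90))
```

The randomized gap between check-ins is what breaks the rigid cadence that gives most bots away.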
Kevin is mostly autonomous, but I do still select who he follows and I reserve the right to step in and censor him.
Kevin only follows people to increase his exposure and attempt to get those people to follow him back. Right now, I select these accounts by occasionally sorting through various hashtags and keywords to find people who might be interested in him.
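A hypothetical helper for that manual sorting step: given bios collected from a hashtag or keyword search, rank the accounts whose stated interests overlap most with Kevin’s persona. The keyword list and scoring rule are invented for illustration; in practice I do this curation by hand.

```python
# Keywords Kevin's persona would plausibly look for (assumed, not his real list).
KEYWORDS = {"progressive", "democrat", "resist", "notmypresident"}

def score_bio(bio):
    """Count how many target keywords appear in an account's bio."""
    words = set(bio.lower().replace("#", "").split())
    return len(words & KEYWORDS)

def rank_candidates(bios):
    """Return bios sorted by keyword overlap, best follow-back prospects first."""
    return sorted(bios, key=score_bio, reverse=True)
```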
I also supervise Kevin in case he sends out content that is offensive or derogatory. Bots that live on, and learn from, the internet sometimes venture into the bad parts of the web, where they pick up equally bad habits. One example was Tay, a bot built by Microsoft that started out mimicking a teenage girl before becoming a Nazi sympathizer in less than 24 hours.
Kevin has an obscenity filter built in and I’ve taken steps to prevent him from following in Tay’s digital footsteps, but you can never be too careful. Other than one early tweet, which I deleted because Kevin had remarked that the Statue of Liberty was a symbol of hate, I have been able to stick to a “hands-off” policy.
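A blocklist check is one simple way to implement such a filter. This sketch uses placeholder entries; Kevin’s real word list and matching rules may differ.

```python
import re

# Placeholder entries standing in for actual obscenities.
BLOCKLIST = {"badword1", "badword2"}

def is_clean(tweet):
    """Reject a draft tweet if any blocklisted word appears in it."""
    words = re.findall(r"[\w']+", tweet.lower())
    return not any(word in BLOCKLIST for word in words)
```

A draft tweet that fails the check simply gets thrown away instead of posted, which is cheaper than trying to repair it.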
My experiment turned out far better than I could have hoped. Not only is Kevin functional, but he is also able to promote original and shared content well beyond his immediate network of followers.
Here’s a brief rundown by the numbers since August 4 (Kevin’s first full day online):
Disappointingly, it is almost impossible to pin down what “good” numbers look like on Twitter because engagement rates vary with many variables. And there is absolutely no data on what to expect from a bot attempting to pass as a human. The best information I could find indicated that personal Twitter accounts average 1–2% engagement, which puts Kevin right where he should be.
For third-party validation I turned to Botometer, a project by Indiana University that scores accounts on how likely they are to be bots. On a scale where anything above 50% suggests a likely bot, Kevin scored an impressive 37%. He’s got a long way to go, though; my personal account scored only 18%.
People also feel comfortable enough with Kevin to reply to his tweets and attempt to start conversations. The earliest example was just four days after his launch, when his tweet-writing abilities were still pretty sub-par. In an otherwise garbled tweet, he criticized Trump for retweeting misinformation and announced that he had gone to prison at some point. One of the people mentioned in the tweet, a pro-Trump conservative, responded (albeit rudely).
Other interactions have been more positive:
Kevin is far from a complete success. He has a number of flaws that I either don’t have the skills to fix or don’t have the time to fix (or both).
Low quality tweets
Sometimes Kevin really sucks at tweeting. This is mostly because he looks at how words fit together, but has no concept of their meaning.
I took some steps to correct and improve his output, but the core issue has never been fully resolved.
Kevin also has a bug that sometimes causes him to repeat words while composing content. The results are some of his stranger, but more entertaining tweets. These tweets tend to occur later in the day, which might have something to do with the problem. It also looks like he might enjoy drunk tweeting.
Kevin occasionally posts conservative, or even pro-Trump tweets. At first this issue really surprised me. When I dug into the bug, though, it made sense that randomly stringing together words might occasionally result in coherent sentences that were ideologically opposed to the source material.
For example, Kevin looked at these two tweets:
And composed this one:
It is a good tweet in that it gets close on grammar and is coherent. It isn’t, however, the kind of thing that you would expect a 20-something liberal to send out. And the worst part is that it is directly at odds with his network.
If Kevin learns from tweets that contain links or references to pictures, he will sometimes use the text without the associated content. The result is an incomplete tweet that directs the reader to click a link that isn’t there.
He will also compose tweets that trail off.
This is far from the worst thing he does, but it looks very “robot-ish” and can raise suspicion if he does it on a regular basis.
The biggest thing I wanted to build but never did was a chatbot feature. Based on my original plans, Kevin would increase his engagement (and credibility) by having conversations with people in his network. As I attempted to implement it, however, I realized the feature would be incredibly complicated or impossible to build.
Chatbots need to be able to respond to a wide array of statements as well as understand how conversations flow from topic to topic. This requires training the bot on an enormous amount of data. The more topics a chatbot is responsible for knowing, the more data it would need to train with. Kevin would have needed so much data to talk about political events in real time that the feature became impractical.
Over the 6 short weeks of Kevin’s life he has become a frequent topic of conversation around my office and at home. My co-workers have even started putting in requests for their own versions (keep an eye out for Twitter’s biggest Cleveland Browns fan).
Watching Kevin’s following grow has taught me a lot about how artificial intelligence can be used to engage people on issues they’re passionate about. I decided to unmask him now to hopefully open up a conversation about how Kevin, and bots like him, can be improved and used. There is incredible value in technology that can facilitate conversation by building connections between people. The same technology, however, can just as easily be used to spread misinformation. We need to be aware of this more sinister capability while building and interacting with bots. Policing these harmful bots isn’t easy, and it requires each of us to be more accountable for scrutinizing the media sources we read.
I don’t know exactly what the future holds for Kevin Brown, but I look forward to continuing to work with whatever comes next for him.