Internet connectivity matters: it brings real educational and social benefits. Unfortunately, those benefits are counterbalanced by potentially dangerous consequences.
Alongside improving communication and democratizing access to information, the internet lets people conceal themselves behind a mask of anonymity. This creates a whole new set of risks for children – and often for adults too.
I still remember an incident in 2017 when news broke about a young mom who committed suicide after cruel trolling online. And guess what? Most of those troll accounts were fake or impersonating someone else. A survey found that 1 in 4 people using social media has been a victim of trolls. And those are just the statistics for the major social media platforms; trolling in other online communities, such as gaming communities, is another story.
Fake and bot accounts on social networks are not only responsible for harassment, cyberbullying, spreading fake news, and spamming, but also for creating propaganda against organizations and governments all around the world. In the run-up to the 2018 midterms in the United States, “bots were used to disenfranchise voters, harass activists, and attack journalists,” says Sam Woolley, the director of the digital lab at the Institute for the Future. “But at a fundamental level, Facebook and Twitter are disincentivized from doing anything about it.”
Recently, the Ethereum co-founder warned in a tweet about more than 5,000 Instagram accounts impersonating him. And Binance CEO CZ highlighted how fake accounts are using his company’s name to scam people on LinkedIn.
These and many similar incidents – a harsh reality – are just the tip of the iceberg. My aim here is not simply to highlight these issues; I am pretty sure most readers already know how bad the internet is when it comes to fake accounts. Instead, this article tries to answer a few questions: How do fake accounts get away with it so easily? What are the big social media platforms doing to mitigate fake accounts (if anything)? What is stopping them from taking proper action? And at the end, we will discuss possible solutions for dealing with fake accounts once and for all.
Let’s start with the first question.
There are people and institutions online that preach internet safety and how to deal with these issues. Some will tell you not to give a f***. Others will emphasize reporting fake accounts. The platforms act too: Facebook claimed to have removed 4.5 billion fake accounts in 2020, and researchers have found that as many as 15% of Twitter accounts are bots, which share two-thirds of the links posted on the site.
But has any of this stopped the creation of fake accounts? The answer is no. In the past few weeks, Musk has tweeted about Twitter’s bot problem, going so far as to cite the issue as a reason for potentially pulling out of his deal to buy the platform. In my opinion, the problem lies in the approach used to deal with these issues, i.e., identifying and banning individual accounts.
Before looking for a solution, we have to identify the hurdles that stop or slow companies from removing fake accounts. Here are some of them:
Policies Already in Place
Twitter’s effort to catch impostor accounts is complicated by its policy of allowing parody accounts, although the company requires them to be clearly labeled. Facebook similarly struggles with accounts posing as public figures.
But policies aren’t the only problem.
Lack of Resources and Efforts
One of the real issues is that identifying and removing these accounts requires an immense amount of resources. But why should social media platforms allocate those resources? Before the 2018 midterms, the companies had no incentive to put in the effort. Once the issue blew up, however, improving the health of their platforms became unavoidable; Jack Dorsey addressed the issue in a tweet and said the company would take measures to improve the health of the platform.
Yet even though social media platforms are trying to get rid of spam accounts, both manually and through automated fake-account detection, new fake accounts appear every day. How do platforms deal with them? This question leads us to the next hurdle.
Lack of Interest and Incentive in Tackling the Issue
The internet is popular mainly as a source of information, and monetization rules it: one report puts digital advertising spend in 2021 at $189 billion. In simple terms, platforms like Twitter and Facebook earn through advertising, and the success of those ads is by and large measured in clicks and views.
As a digital marketer, I have run ads on both Twitter and Facebook, and to my surprise, most of the views on my ads came from bot accounts. And it isn’t just me: while pulling out of the Twitter deal, Elon Musk highlighted how bots are used to fake the success of ads. In short, fighting spam and bots with automation would require leveling with investors about how it will hit the platform’s bottom line. It wouldn’t be wrong to say that these companies’ business models depend on bots and fake accounts.
Now that we have looked at the problems with the identify-and-ban approach, you may be wondering: if this isn’t a good approach, what alternative do we have?
While working on this article, I did a lot of research on potential solutions, and it all boils down to one word: accountability. You see, when someone creates a fake account impersonating someone else, there is nothing at stake for the impersonator. The maximum punishment (a harsh word, but I find it the most suitable one here) is that the account gets banned, and they can easily create another one afterward.
So the point of all this is that online platforms need a system that holds impersonators accountable in one way or another. Holding them accountable doesn’t mean sending them to jail just for creating a fake account; that isn’t possible, because it would require every government to pass laws to that effect. However, a punishment like banning the user from the platform for a period of time that scales with the severity of the misconduct is doable.
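To make that concrete, here is a minimal sketch of such an escalating-ban policy in Python. The violation tiers, durations, and doubling rule are illustrative assumptions of mine, not a policy any platform has published; the one essential idea is that the record is keyed to a verified identity rather than an account name.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical severity tiers and base ban durations (illustrative only).
BAN_DURATIONS = {
    "spam": timedelta(days=7),
    "impersonation": timedelta(days=30),
    "harassment": timedelta(days=90),
}

@dataclass
class IdentityRecord:
    """Moderation history keyed to a verified person, not an account handle."""
    identity_id: str
    strikes: list = field(default_factory=list)
    banned_until: datetime | None = None

    def penalize(self, violation: str, now: datetime) -> None:
        """Record a violation; each repeat offense doubles the ban."""
        self.strikes.append(violation)
        base = BAN_DURATIONS[violation]
        # Because the record follows the person, deleting the account
        # and re-registering no longer resets the clock.
        self.banned_until = now + base * 2 ** (len(self.strikes) - 1)

    def can_sign_up(self, now: datetime) -> bool:
        return self.banned_until is None or now >= self.banned_until
```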
In simple words, we need a system where fake accounts are not only deleted but the person behind them is banned from using the platform. This is only possible if platforms know who is signing up, and there are multiple ways to achieve that. Let’s discuss some of them here:
KYC Using Identity Documents
When we talk about identification, the first thing that comes to mind is KYC through ID documents; after all, banks use this system. (For those who don’t know what KYC is, read this article.) However, there are problems with using government-issued IDs. First, they can be forged. Second, most governments will never allow a private firm to hold citizens’ data, and most people online don’t want to reveal their real identities, fearing that the information might be used against them by companies or exploited by hackers. Online platforms also could not bear the expense of verifying every user’s identity. On top of all these concerns, document verification adds friction for end users.
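To see where that friction and cost come from, here is a minimal sketch of a typical document-KYC signup pipeline. Every function here is a hypothetical placeholder of mine, not a real vendor API; real pipelines add even more steps (sanctions screening, proof of address), which only makes the point stronger.

```python
# Hypothetical document-KYC pipeline; none of these functions are a real API.

def passes_forgery_checks(document_image: bytes) -> bool:
    """Stub for font, hologram, and MRZ-checksum heuristics.
    Forged documents can and do slip through checks like these."""
    return True

def extract_document_fields(document_image: bytes) -> dict:
    """Stub for OCR of name, date of birth, and document number."""
    return {"name": "Jane Doe", "dob": "1990-01-01", "doc_no": "X1234567"}

def faces_match(document_image: bytes, selfie_image: bytes) -> bool:
    """Stub for comparing the ID portrait against a live selfie."""
    return True

def kyc_signup(document_image: bytes, selfie_image: bytes) -> dict | None:
    """Each step costs the platform money and costs the user time."""
    if not passes_forgery_checks(document_image):
        return None
    fields = extract_document_fields(document_image)
    if not faces_match(document_image, selfie_image):
        return None
    return fields  # the platform now stores name, DOB, and document number
```

Note the last line: after all that work, the platform ends up holding raw identity data, which is precisely what users and governments object to.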
Using Biometrics for Verification
Biometrics have clear advantages from a verification perspective: they are difficult to fake and involve far less friction for end users. Although biometrics are today used mainly for security and privacy, social platforms could use them to verify identity as well. However, despite the benefits, there are serious privacy and security concerns: malicious actors could gain access to users’ biometric information, and governments will not allow a third party to collect their citizens’ biometric data.
Pseudonymous Biometric Identification with Liveness Detection
While blockchain has been one of the hottest topics in recent years, people have yet to see much of its utility outside finance. Combined with biometrics, blockchains can make identity management more secure. With pseudonymous biometric verification, social networks can identify users without users ever sharing raw biometric information. In simple words, crypto-biometrics lets a platform verify that every user signing up is a unique identity, while liveness detection ensures that a real human is signing in.
Crypto-biometrics works so that the user’s biometric data never leaves the verification device. Instead, an encrypted representation is shared over a decentralized network of validator nodes, and those nodes can only check whether the user is already registered in the network.
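Below is a deliberately simplified sketch of that flow. Real crypto-biometric systems use fuzzy extractors or secure multi-party computation so that slightly different scans of the same person resolve to the same identity; the plain SHA-256 commitment here is a stand-in for that machinery, kept only to make the uniqueness check visible.

```python
import hashlib

class ValidatorNode:
    """Each node stores only opaque commitments, never raw biometric data."""

    def __init__(self) -> None:
        self.commitments: set[str] = set()

    def is_registered(self, commitment: str) -> bool:
        return commitment in self.commitments

    def register(self, commitment: str) -> None:
        self.commitments.add(commitment)

def biometric_commitment(template: bytes) -> str:
    # The raw template never leaves the user's device; only this one-way
    # commitment is broadcast. (A plain hash is a simplification: real
    # systems need fuzzy matching, since two scans of the same person
    # are never bit-identical.)
    return hashlib.sha256(template).hexdigest()

def sign_up(template: bytes, nodes: list[ValidatorNode]) -> bool:
    commitment = biometric_commitment(template)
    # Ask the validators whether this identity already exists.
    already_seen = sum(node.is_registered(commitment) for node in nodes)
    if already_seen > len(nodes) // 2:  # majority says duplicate
        return False  # a second account for the same human is refused
    for node in nodes:
        node.register(commitment)
    return True

nodes = [ValidatorNode() for _ in range(5)]
print(sign_up(b"alice-scan", nodes))  # True: first registration succeeds
print(sign_up(b"alice-scan", nodes))  # False: duplicate identity rejected
```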
By using pseudonymous biometric technology, social networks can ensure that:
i. One human can sign up for only one account.
ii. The user’s personal information is never stored on the network.
iii. No bots get access to the network.
iv. Anyone violating the platform’s policies or spamming the network can easily be banned for a set amount of time.
v. There is no friction for users, since verification is as easy as unlocking an iPhone.
Whether or not social networks adopt pseudonymous biometric technology, the problem of spam and fake accounts has to be solved, because the incentives are shifting: advertisers now demand that companies change their policies and deliver real business. A Quartz article on the issue put it this way: “Advertisers, who ultimately hold the dollars that fund the services, have also been more vocal about their issues with automation on social media. After all, brands are paying for eyeballs, and bots don’t have them.”
So it is clear that, sooner or later, these companies will have to take strict action against fake accounts.