Despite the widespread controversy in the aftermath of the 2016 U.S. election, the public generally isn't aware that the term 'fake news' hardly captures the full extent of information manipulation on social media.
Renee DiResta, technical research manager at the Stanford Internet Observatory, has detailed the complete picture in numerous podcasts and interviews. It wasn't until I heard her speak that I was able to grasp how absolutely diabolical the adversaries' schemes were.
Though much of the public has come to see fake news as the whole story, many additional elements are at play, such as troll farms, bots, and disinformation campaigns. The Alliance for Securing Democracy explains:
The tendency to associate information operations with falsehoods or wild conspiracy theories misses an important point: the vast majority of content promoted by Russian-linked networks is not, strictly speaking, 'fake news.' Instead, it is a mixture of half-truths and selected truths, often filtered through a deeply cynical and conspiratorial worldview.
The true circumstances can only be described as full-on information warfare, and we still have no convincing strategies to fight this war. In this article, I'll explain why digital IDs (DIDs) are the solution we need and how they can be used to effectively prevent information manipulation.
Without a doubt, Russia has gained the most notoriety for its meddling in the 2016 U.S. election, but many adversarial governments have involved themselves in this new frontier of conflict.
With so many actors at play, it would be a mistake to think that this war is waged only around special events such as election cycles; it's in every actor's interest to put forth a continuous effort. For example, the illusory truth effect describes how the more often a message is repeated, the more likely it is to be perceived as truth.
By holding a consistent conversation and creating the illusion that multiple news sources and individuals support a certain narrative, an attacker can manipulate the human psyche far more profoundly.
It's a very subtle process, but also a very effective one, and it requires a slow but persistent effort. It's why Robert Mueller emphasized, on numerous occasions while working as special counsel, that the conflict didn't end with the election.
Social media companies have devoted a spectrum of resources to this fight, but all they can really do is stop an aggressor after they've begun to act, which quickly feels like a futile game of whack-a-mole. Renee pretty much nails my feelings here:
We're still thinking about this broad bucket of complaints against social media, much of which is deserved. But the challenge is how do you get at the information warfare piece of this, where we don't presently have anything that remotely resembles a deterrent.
We should also be concerned that large social media platforms may not be the only attack vectors. The internet is overflowing with websites that allow users to communicate with each other in various ways.
Much of the public's attention has been focused on what large social media companies are doing to fix the problem, but it may be worthwhile for an attacker to use smaller websites where defenses are much weaker. Another issue is that social media platforms face many accusations of bias and censorship as they maneuver to curb information manipulation. Users don't trust a centralized entity to determine what is real and what is fake, and honestly, this is a justified concern.
In order to form a successful defense that amounts to a cure rather than a symptomatic treatment, we must first understand what the disease actually is. Many research groups have managed to isolate two very important warfare strategies. The first is trolls who wear a fake persona on the internet: it's routine for Russian trolls to assume the identity of a concerned American and then post content that frames the narrative they want to set. These trolls have even managed to organize real Americans to take action in the real world, all from their anonymous positions on the internet. The second is bots, which are mostly used to astroturf and amplify a narrative by engaging with certain content, though they are also capable of making generic comments.
I propose DIDs as a solution because, as the statements above show, the issue at play is fundamentally one of identity. Anonymity on internet platforms has often been criticized for this very reason. In a report published for the European Parliament, Dr. Ziga Turk, professor of Construction Informatics, writes,
Stopping bots is easy and unproblematic. Platforms need to enforce the ‘real identity’ of their users. Here, Facebook’s policy is most strict. The policy is that there should be a real person behind each account and that each person should have only one account on Facebook.
Clint Watts, renowned counterintelligence and social media manipulation expert; Thomas Rid, Professor of Strategic Studies at Johns Hopkins; Colin Stretch, Facebook's General Counsel; and the Atlantic Council's Digital Forensic Research Lab, a disinformation research group, have all testified before the U.S. Senate and expressed similar sentiments.
Verifying identity in order to use an online service isn't a radical concept. 'Know your client' (KYC) refers to procedures, common in the financial industry, that are designed to prevent money laundering, fraud, and various forms of illegal financing.
The procedure might require that a client upload copies of a photo ID and an additional document that gives proof of residence. If KYC were mandated on social media, each account on Facebook, Twitter, etc., would have to be associated with a unique person who couldn't pretend to be an American while really living in Russia.
Creating bot accounts wouldn't be possible unless one were to sacrifice one's own account for the purpose, and if automated activity were detected on that account, the individual behind it would effectively be barred from the platform.
I think there's a lot of sense in this approach, but many of you may have already picked out the flaws. For one, the risk of fraud is low when an average retail customer uploads a photo of their ID, but if that customer is backed by a powerful government that's hellbent on crushing the West, falsified identity documents are a near certainty.
The other issue, a problem that irritates financial institutions to no end, is that the process is horribly inefficient. A 2016 Thomson Reuters survey estimated the cost of KYC at a yearly average of $60 million per institution, an expense that would be unreasonable for social media platforms given their global user base.
While these knocks against KYC may seem overwhelming, the drawbacks are not inherent to the concept; the faults lie in the analog materials we use as IDs. There's no need to restrict ourselves to this archaic technology, though: in 2020, ecosystems of DIDs exist and adoption has already begun.
Bart Jacobs, considered to be the Netherlands' most influential cybersecurity expert, created the oldest of the three DID solutions that I came across prior to starting this article (I've since learned that there are many similar projects in development).
IRMA (I Reveal My Attributes) is the product of Jacobs's Privacy by Design Foundation, and it's a very robust piece of technology. IRMA is a free app for Android and iOS; it is GDPR compliant, has won several awards, is the focus of many academic publications, and has already been adopted by several organizations in the Netherlands, such as municipalities and academic institutions.
The app can be thought of as a wallet filled with attributes: small pieces of information that form your identity, such as a passport, age, email address, and name.
Attributes can be issued by any organization and are digitally signed to efficiently prove their authenticity; modern cryptographic techniques make it harder to forge one of these signatures than it would be to forge a physical ID.
A bank can issue a debit card as an attribute which can then be effortlessly authenticated at any branch or ATM; a government can issue a passport or bus pass; a retail store can issue a virtual loyalty card; and so on it goes.
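To make the issuance model concrete, here's a minimal Python sketch of the signing concept. The attribute fields and keys are hypothetical, and IRMA itself uses Idemix attribute-based credentials rather than bare signatures like these; this only illustrates why a signed attribute is so much harder to forge than a plastic card.

```python
# Minimal sketch: an issuer signs an attribute, a verifier checks it.
# Hypothetical fields; real systems like IRMA use attribute-based
# credentials (Idemix), not bare Ed25519 signatures.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (say, a bank) holds a long-term signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

# An attribute is a small, structured claim about the holder.
attribute = {"type": "debit_card", "holder": "alice", "card_no": "1234"}
payload = json.dumps(attribute, sort_keys=True).encode()

# Issuance: the issuer signs the attribute and hands both to the holder.
signature = issuer_key.sign(payload)

# Verification: any branch or ATM holding the issuer's public key can
# check authenticity offline; verify() raises InvalidSignature on forgery.
issuer_public.verify(signature, payload)
print("attribute is authentic")
```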
Blockchain is poised to be a powerful technology underlying many DID systems. SecureKey Technologies' Verified.Me and the Sovrin Foundation's Sovrin Network both operate on blockchains from the Linux Foundation's Hyperledger project.
Verified.Me has so far focused on serving Canadians and was created in partnership with IBM Blockchain and Canada's big five banks, which is not a small alliance by any means. Sovrin Network, also supported by IBM Blockchain, considers itself a global public utility and is not focused on any one region.
How secure are these technologies, though? The details are a little outside the scope of this article, so I'll just say that they're incredibly robust, and more information is a quick DuckDuckGo search away. I do, however, want to talk about one aspect of privacy with DIDs, because it will become relevant further down, and I'll start with this question: if I need to be 18 years of age to enter a bar, do I need to show the bouncer my whole birthdate?
No, I really don't. In actuality, this is a yes-or-no question: if I'm over the age of 18, I just need to provide proof of a 'yes' answer. That's not so easy to do with conventional IDs, but it is for DIDs, and it's called selective disclosure: providing only the bare minimum information necessary to satisfy the request.
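Here's a minimal Python sketch of that idea, under one big simplification: the issuer signs a derived yes/no attribute directly, whereas production systems achieve the same effect with zero-knowledge range proofs over the signed birthdate. All names here are hypothetical.

```python
# Selective disclosure, heavily simplified: the issuer signs only the
# derived yes/no predicate; the birthdate itself is never disclosed.
import json
from datetime import date
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

def issue_age_predicate(birthdate: date, today: date) -> tuple[bytes, bytes]:
    """Derive and sign the predicate; the birthdate never leaves the issuer."""
    age = (today - birthdate).days // 365  # approximate age is fine here
    claim = json.dumps({"over_18": age >= 18}, sort_keys=True).encode()
    return claim, issuer_key.sign(claim)

claim, sig = issue_age_predicate(date(1995, 6, 1), date(2020, 7, 1))

# The bouncer sees only {"over_18": true} plus a valid issuer signature.
issuer_key.public_key().verify(sig, claim)
print(claim.decode())
```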
How exactly DIDs will be used to stop information manipulation isn't straightforward because social media companies will have some flexibility when integrating them. I'll detail one possible framework to illustrate how this all might work. Twitter will be my example here.
When you make an account, Twitter will request ID from a trusted issuer, such as the federal government of your country; everything is digital and automated, so providing your DID for authentication will be smooth and quick.
This is the basic process of proving you are a real and unique human, and it's at this stage that bot accounts are filtered out. You can make any Twitter handle and username as you do now, but each account will have an attribute list that's automatically populated with information you choose to disclose from your DID, such as age, location, or even your real name; it's at this stage that someone is prevented from assuming a false identity online.
When your location or nationality must be verified, you can't just claim to be what you're not, and this prevents the average Russian spy from posing as the average American Joe.
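Here's a hedged sketch of what that signup gate might look like server-side. The presentation format, the `did:example:` issuer identifier, and the `sign_up` helper are all hypothetical, and the cryptographic verification of the presentation is elided; this only illustrates the decision flow: trusted issuer, proof of unique personhood, then optional disclosures.

```python
# Hypothetical signup gate. Cryptographic verification of the
# presentation is elided; this shows only the decision flow.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    disclosed: dict = field(default_factory=dict)  # user-chosen attributes

TRUSTED_ISSUERS = {"did:example:federal-gov"}  # issuers the platform accepts

def sign_up(handle: str, presentation: dict) -> Account:
    # 1. The credential must come from an issuer the platform trusts.
    if presentation["issuer"] not in TRUSTED_ISSUERS:
        raise ValueError("untrusted issuer")
    # 2. It must prove a real, unique person: this filters out bots.
    if not presentation["claims"].get("unique_person"):
        raise ValueError("personhood not proven")
    # 3. Everything else (age, location, real name) is optional disclosure
    #    that populates the account's public attribute list.
    optional = {k: v for k, v in presentation["claims"].items()
                if k != "unique_person"}
    return Account(handle=handle, disclosed=optional)

account = sign_up("@averagejoe", {
    "issuer": "did:example:federal-gov",
    "claims": {"unique_person": True, "country": "US"},
})
print(account)
```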
What will be required of real users is to make a habit of investigating others' attributes, and to facilitate such habits, the attributes need to be easily accessible, say by including a 'quick-view' button next to a user's thumbnail.
Thanks to selective disclosure, there is flexibility in how precise a location you reveal. To maintain some degree of anonymity, one can choose to reveal only their country, but local governments can also issue attributes, so it's possible to verify your location right down to the municipal level.
The ability to do this isn't something that should be overlooked: information manipulation campaigns also occur domestically. Renee has produced research on one such campaign, an anti-vaccination effort orchestrated to challenge a California bill.
Facebook groups were used for strategic planning as many Americans from outside California took on the identity of a concerned Californian to grow dissent against the bill. Verifying your city or state can easily prevent these domestic campaigns.
Anonymity and privacy can be taken a step further with DIDs. To prevent information manipulation, the bare minimum that's needed is to prove that you are a real and unique human that hasn't made an account on the platform before.
It's not necessary for anyone, including the platform you're signing up with, to know anything beyond that, not even who issued your attributes. This again draws on selective disclosure and is achieved with zero-knowledge proofs, something Sovrin details extensively in its whitepaper.
In doing this, the platform will only know that an issuer it trusts has certified you as a unique human, and that's good enough to ensure that bot accounts can't be made; a simplified sketch of how that can work follows.
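One simplified way to picture one-account-per-person without identity leakage: the wallet derives a platform-scoped pseudonym from a private link secret, so the same person always collides with themselves on a given platform while staying unlinkable across platforms. This Python sketch is my own illustration, not Sovrin's actual construction; real systems wrap the derivation in zero-knowledge proofs so the secret itself is never exposed.

```python
# One-account-per-person, heavily simplified. The wallet derives a
# platform-scoped pseudonym from a private link secret: a second signup
# on the same platform collides, while pseudonyms across platforms
# remain unlinkable.
import hashlib
import secrets

link_secret = secrets.token_bytes(32)  # lives only in the user's wallet

def platform_pseudonym(secret: bytes, platform_id: str) -> str:
    return hashlib.sha256(secret + platform_id.encode()).hexdigest()

registered: set[str] = set()

def register(pseudonym: str) -> bool:
    """The platform accepts a signup only for an unseen pseudonym."""
    if pseudonym in registered:
        return False  # the same human trying to open a second account
    registered.add(pseudonym)
    return True

nym = platform_pseudonym(link_secret, "twitter")
assert register(nym)          # first signup succeeds
assert not register(nym)      # duplicate account rejected
assert platform_pseudonym(link_secret, "facebook") != nym  # unlinkable
```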
And what about those fake personas? Well, once a user claims to be something, or to be from somewhere, there's no reason left not to prove those claims through verification. Hopefully I'm not the only one who would be suspicious of an account that doesn't.
Simply eliminating bots and the ability to assume a fake persona is enough to stop most attack vectors used for information manipulation. That's good news, because large social media platforms have been the crux of the issue. Still, attackers don't need to limit themselves to these platforms; then again, neither do DIDs.
The ability to quickly scale to all web platforms and to be flexibly used according to a particular website's needs is just another beautiful property of DIDs.
For example, article authors can include their digital signatures with their publications, and a social media platform can reject the sharing of any article that isn't accompanied by a valid signature. Another example: content algorithms can use verified locations to confirm that interest in a certain event is trending organically in a certain country.
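As an illustration of the first example, here's a hedged Python sketch of the platform-side rule. How the platform learns and trusts an author's public key is assumed away, and the function names are hypothetical.

```python
# Hypothetical platform-side rule: no valid author signature, no share.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()
author_public = author_key.public_key()  # assumed known to the platform

article = b"Opinion: why digital IDs matter"
signature = author_key.sign(article)

def allow_share(body: bytes, sig: bytes | None) -> bool:
    if sig is None:
        return False  # unsigned articles are rejected outright
    try:
        author_public.verify(sig, body)
        return True
    except InvalidSignature:
        return False  # tampered body or forged signature

print(allow_share(article, signature))      # True
print(allow_share(article, None))           # False: no signature
print(allow_share(b"tampered", signature))  # False: body altered
```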
After spending so many words raving about DIDs, it's time to acknowledge the limitations. First, which agencies should be trusted to verify identities? In countries with low corruption, the government is an obvious and acceptable choice, but how do we trust Russians to be verified if their government is one of the greatest antagonists in this story?
It still wouldn't be possible to create fake personas in this situation, but bots certainly could be made. Extending that, what about governments conducting information warfare on their own people? In that case it would be possible to create both fake personas and bot accounts. Edward Snowden's NSA leaks are proof that governments such as America's can't be trusted not to do this.
Another issue is that DIDs cannot prevent the organic spread of disinformation and misinformation. For example, if a real author with a verified identity publishes a steaming pile of crap disguised as an 'opinion piece,' and real people then begin to share that article or pieces of it, there's not much that can be done without entering the territory of potentially dangerous censorship.
Clearly there's lots more to think about, but then again, none of these limitations are introduced by DIDs; they already exist right now. DIDs will allow us to put some issues behind us for good and focus on what's left, so they still represent an important step forward in this new frontier of conflict.
We may even be able to use DIDs to solve the remaining problems. The brilliant minds that brought the technology this far may have ideas to take it even further once the problem is clearly presented to them. It's also important to note that preventing information manipulation is but one application of DIDs, and not even their primary function.
This technology promises to bring solutions to many problems across a variety of industries. The world that is shaped by DIDs will be an incredible one, and you will finally find satisfaction knowing that the person on the other side of the hours-long Twitter spat you had isn’t part of a foreign trolling campaign.
Disclosure: I have no vested interests in any of the digital ID products mentioned.