Politics in Social Media: Bots, IDM and Decentralized Moderation by@nerv


June 6th, 2022

Network Emergency Response Volunteers

We are a DAO committed to helping societies become less autocratic by developing digital governance tools.


The internet has become the agora[1] of modern times. Where people once met at specific physical locations to discuss political matters and other news, usually in a central square in the middle of the city or in amphitheatres at the periphery, today we do so online, on social media platforms.

The question is not whether we are discussing politics over the internet or not (we obviously are), but rather if we are doing it in the right way.[2] Most official city portals rely heavily on "insecure" social media platforms like Facebook, Twitter, YouTube, and Flickr to engage with the public, and very few incorporate complex participation tools such as forums, chat rooms, e-voting, etc., that could otherwise allow for more direct and consequential interaction.



Mass protest in Tunisia, birthplace of the Arab Spring, which culminated in the exile of former President Zine El Abidine Ben Ali to Saudi Arabia. Nine out of ten Tunisians used Facebook to organize the protests.

Most of the mass protests that have erupted since the early 2010s[3],[4], from Hong Kong[5] to Algeria and Lebanon, from riots in Portland, USA, to anti-government protests over COVID measures around the world[6], were organized online with laptops and smartphones, inspired by hashtags and coordinated through social networks.

Facebook pages were created to raise awareness of crimes against humanity, such as the police brutality during the Egyptian Revolution that culminated in the death of Khaled Mohamed Saeed, or the death of George Floyd, which galvanized the Black Lives Matter movement in the USA. The 2021 storming of the U.S. Capitol by an angry mob is yet another example of the power of these platforms.[7] In response, public authorities have repeatedly blocked or temporarily disrupted access to social media platforms: in 2018 alone, there were 128 documented state-mandated internet shutdowns.


In 2021, the U.S. Capitol, where the national Congress convenes, was overrun by an angry mob that questioned the legitimacy of the presidential election results. Prior to the event, there were more than a million mentions of invading the Capitol on "alt-tech" platforms such as the news aggregator Patriots.win, the chat app Telegram, and the microblogging sites Gab and Parler.

The issue with most internet platforms is that anyone can claim to be whoever they want, and there are few mechanisms in place to verify the identity of the person behind the computer. We can name ourselves by entering a nickname and create as many accounts as we want, because there is usually nothing that binds a digital identity to a real person. This is both good and bad. On the one hand, it allows for the free flow of information that could otherwise be more easily repressed. On the other, if there is no mechanism for accountability or traceability, i.e. if users cannot attest to the authenticity of their publications, useful tools such as online voting become unavailable and the publication of harmful content[8] is encouraged because there are generally no penalties.

What happens in a world without a digital identity?

The web goes dark, in a sense.[9] It becomes an environment with weak moderation. This shouldn't come as a surprise to anyone; after all, it's the internet we're talking about. What may come as a surprise is that social media networks like Facebook can be classified as highly insecure.

Prostitution is rampant on the platform.[10] Bots[11] that spread fake news[12] are also out of control.[13] Twitter is no different![14] These social bots can stage human-like conversations and often go unnoticed by less attentive users.[15] Their ability to influence human behaviour should not be underestimated.

Research[16] shows that bots accounted for about 11% of all hate tweets analysed in online conversations about controversial policies related to the Israel/Palestine and Yemen conflicts. They can influence our thinking and define narratives.[17],[18] At the same time, over one million users access Facebook completely anonymously.[19]

None of these points should be taken as cheap criticism. It is widely acknowledged that monitoring content on the internet is difficult.[20] But awareness of these facts, and of the characteristics that define the platforms we use, is important. The claim is not that content on the internet is impossible to moderate in general. It could be done.[21]

The task, however, is immensely difficult and often requires state powers and resources to be effective, at least under the current paradigm. Artificial intelligence algorithms are improving by the day and can make the task easier.[22],[23] But just as AI techniques become more advanced, bots become better at manipulating humans as well. There has to be a better way to be on the internet.


Source: https://www.socialsamosa.com/2017/05/ukraine-russia-twitter-feud/

Another trilemma

For those who are deep into Web3, at some point during your incursion you have probably come across the so-called blockchain trilemma, coined by the creator of Ethereum. It summarizes the fundamental problem of blockchain, the underlying architecture of Web3, as the need to reconcile three apparently mutually exclusive properties: scalability, security, and decentralization.

Lurking alongside this trilemma is perhaps another one of equal importance, related not to the underlying software-hardware architecture, but to the way we as humans use the internet and how we can take advantage of it without falling into traps and pitfalls. Essentially, to stop the world wide web from becoming the world wild west, we must balance three other 'lemmas': moderation, identity, and privacy.

I think we can all agree that we need some form of content moderation online. To achieve this end, some form of identity needs to be established. But we also want to keep a certain level of pseudonymity, for when we want to publish under an alias rather than a real name, and for this a special type of identity is necessary. Just as proof-of-stake and sharding were proposed as part of the solution to the blockchain trilemma, there is hopefully a solution to this other triad.

Casting light into the darkness

The middle ground, where responsibility can be attributed to users while their anonymity is preserved, is proper identity management[24] (also known as IDM). The idea behind IDM is to clearly separate identity attestation from user account registration and later authentication (login), using intelligent cryptographic tricks and a separation of roles between the participating entities.
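As a toy illustration of this separation of roles, here is a minimal sketch in Python. All names and mechanics are hypothetical (a real IDM scheme would use public-key certificates, not HMAC tags): one trusted authority attests real identities and issues unlinkable pseudonyms, while the platform only ever sees pseudonyms and certificates.

```python
import hashlib
import hmac
import secrets


class IdentityAuthority:
    """Trusted authority: attests real identities, never sees platform activity."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # authority's secret signing key
        self._issued = set()                 # real IDs already given a pseudonym

    def attest(self, real_id: str) -> tuple[str, str]:
        """Check a real-world identity once, then issue a pseudonym + certificate."""
        if real_id in self._issued:
            raise ValueError("one pseudonym per person (Sybil resistance)")
        self._issued.add(real_id)
        pseudonym = secrets.token_hex(8)     # random alias; real_id is NOT embedded
        cert = hmac.new(self._key, pseudonym.encode(), hashlib.sha256).hexdigest()
        return pseudonym, cert

    def verify(self, pseudonym: str, cert: str) -> bool:
        """Platforms call this; the authority learns the pseudonym, not the content."""
        expected = hmac.new(self._key, pseudonym.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert)


class Platform:
    """Social platform: sees pseudonyms and certificates, never real identities."""

    def __init__(self, authority: IdentityAuthority):
        self.authority = authority
        self.accounts = set()

    def register(self, pseudonym: str, cert: str) -> None:
        if not self.authority.verify(pseudonym, cert):
            raise PermissionError("invalid identity certificate")
        self.accounts.add(pseudonym)
```

Note how accountability and pseudonymity coexist: the platform can reject unattested accounts and limit each person to one identity, yet it never learns who is behind a pseudonym, and the authority never learns what the pseudonym publishes.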

Vulnerabilities arising from poor implementations of IDM are a well-known problem and a world in themselves.[25] Since we don't expect the reader to be a cybersecurity expert, we'll spare the details. Suffice it to say that yes, it is possible to have IDM with cybersecurity. Implemented correctly, it allows users to navigate in safe environments so long as the trusted authority in charge of managing what the cybersecurity community calls digital certificates is not corrupted (HTTPS being the most well-known example). In fact, we use this kind of procedure in all public institutions: whether we are talking about the issuance of fiat money, the counting of votes during elections, or the issuance of travel passports and border control[26], some level of identity control is always necessary.

It is true that this architecture depends on the good behaviour of such trusted institutions, and breaches, although uncommon, do exist. But the task entrusted to them is rather simple: to attest to the authenticity of an identity by publishing a certificate, without unnecessarily disclosing sensitive data.

Of course, there are ways to mitigate flaws in the design of these mechanisms, in what has come to be known as a web-of-trust, where trust is shared between multiple entities. There are also other, more exotic ideas in the works, such as those making use of pseudonymous parties, also known as validation ceremonies, coupled with special forms of social gamification. We shall discuss this question in an article of its own, as it merits a deeper look.
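The web-of-trust idea of sharing trust between multiple entities can be sketched as a simple k-of-n endorsement rule (a deliberate simplification; real webs of trust, as in PGP, are far richer): an identity is accepted only once enough independent endorsers have vouched for it, so no single corrupted authority can forge identities on its own.

```python
from collections import defaultdict


class WebOfTrust:
    """Accept a pseudonym only after `threshold` independent endorsers vouch for it."""

    def __init__(self, endorsers: set[str], threshold: int):
        self.endorsers = endorsers        # entities allowed to vouch
        self.threshold = threshold        # minimum number of independent vouches
        self.vouches = defaultdict(set)   # pseudonym -> endorsers who vouched

    def vouch(self, endorser: str, pseudonym: str) -> None:
        """Record one endorser's attestation of a pseudonym."""
        if endorser not in self.endorsers:
            raise PermissionError(f"{endorser} is not a recognized endorser")
        self.vouches[pseudonym].add(endorser)

    def is_trusted(self, pseudonym: str) -> bool:
        """A single rogue endorser is not enough below the threshold."""
        return len(self.vouches[pseudonym]) >= self.threshold
```

With a threshold of two, for example, compromising one endorser gains an attacker nothing; they would need to corrupt at least two independent entities to mint a trusted identity.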

Putting aside issues of trust and the details of identity management: in order to understand the links between cyberpolitics, online voting, and accountability, or put differently, between moderation, identity, and privacy, it may help to clarify how content is usually presented and moderated on conventional social media networks; from the prevention of the spread and encouragement of violence to disinformation and censorship[27], how these phenomena arise and what can be done to mitigate harm.

Evidently, any platform designed to host political discussions (which is to say, any platform where people can discuss freely) should not rely on fact-checkers or any other form of moderation delegated to entities outside the community itself, for this is doomed to long-term failure, for obvious reasons.[28]

A better strategy might be to empower the users of the community to self-moderate.[29],[30] The reason why we should trust an authority to manage the attestation of our identities, but not to moderate online content, has to be stated clearly. First, it detaches identity from moderation, which would otherwise entail too great a concentration of power if both were in the hands of the same authorities.

Secondly, it is easier to monitor a table of identities, credentials, and some proof of identity such as a birth certificate than it is to monitor all the content published online every day. Hence it makes sense to delegate one process and not the other. It is interesting to note that the collective intelligence of a community is usually more powerful than the intelligence of the individual,[31],[32] although this should be taken with a pinch of salt, as the collective behaviour of masses is, like all human action, not an exact science. Such self-moderation could range from a simple ignore function to a complex ranking system where people upgrade or downgrade publications directly.
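The two self-moderation mechanisms just mentioned, a personal ignore function and community up/downgrading of publications, can be sketched together in a few lines (the scoring rule and threshold are illustrative choices, not a proposal for a real ranking algorithm):

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    score: int = 0  # net community votes


@dataclass
class Member:
    name: str
    ignored: set = field(default_factory=set)  # authors this member mutes

    def ignore(self, author: str) -> None:
        """The simple .ignore function: mute an author for this member only."""
        self.ignored.add(author)


def vote(post: Post, delta: int) -> None:
    """Community ranking: delta = +1 to upgrade, -1 to downgrade a publication."""
    post.score += delta


def feed(posts: list[Post], member: Member, hide_below: int = -2) -> list[Post]:
    """Show posts the community hasn't buried and the member hasn't muted."""
    return [p for p in posts
            if p.score > hide_below and p.author not in member.ignored]
```

The key property is that no central moderator appears anywhere: visibility emerges from the votes of identified community members plus each reader's own mute list, which is exactly the division of labour argued for above.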

A recent online petition sent to UK authorities called for a safer social media environment by implementing precisely the kind of digital identity we have been discussing. It was promptly answered in the negative.[33] Allegedly, providing digital identities by means of digital certificates and implementing them on social media platforms, even coupled with pseudonymous aliases, could increase cybersecurity vulnerabilities and be counter-productive.

But is the alternative, our current system, any better as it stands? Furthermore, does this reluctance of public institutions to provide a digital identity service have any parallel with the general reluctance to accept cryptocurrencies?[34] Are our elected officials qualified and honest enough to decide on such technical matters?

Buyer's remorse?

Even more recently, a $44,000,000,000 deal was put on hold precisely over these issues.[35] Elon Musk, the world’s richest man, offered 44 billion dollars to buy Twitter on the assumption that at most 5% of its user base was fake (composed of bots or of users with multiple accounts).

However, the deal was put on hold when Mr. Musk estimated that the real number ought to be at least 20%, four times higher. Regardless of the exact figure, this is a hot topic in today's world and is surely not going away until clear positions on these matters are taken. We should not expect all platforms to be absolutely cybersecure in the sense we have been discussing; it is even desirable that some are not. But those where political discussions are taken seriously should be.


A publication by the European Commission commenting on the conflict between Ukraine and Russia, as seen on www.pleroma.pt, a Portuguese instance of the federated ActivityPub protocol, a Web3 tool. On this platform, no strong IDM protection is used.

What I would like the reader to take from this article is the understanding that politics through digital tools is not just a cyber dream of the future; it is already happening, and that is why we urgently need a safer cyberspace. Security here doesn't mean better ways to keep your password safe. It goes way beyond that. It means, among other things, asking users to be moderators as well. It means relying on third parties to issue and protect our credentials, and continuously supervising the behaviour of those authorities. The alternative to creating this safe environment is bots and fact-checkers taking over our political discourse, potentially deceiving the public with false narratives.

And that, I think, we don't want for sure.

Authored by @nerv

Links to Sources

[1] "Agora" entry on Wikipedia, https://en.wikipedia.org/wiki/Agora

[2] "Democratic Principles for an Open Internet" by the Open Internet for Democracy initiative,  https://openinternet.global/themes/custom/startupgrowth/files/OpenInternetBooklet_english.pdf

[3] "Arab Spring" entry on Wikipedia, https://en.wikipedia.org/wiki/Arab_Spring

[4] "Facebook and Twitter key to Arab Spring uprisings: report" by Carol Huang, The National, UAE, 2011, https://www.thenationalnews.com/uae/facebook-and-twitter-key-to-arab-spring-uprisings-report-1.428773/

[5] "2019–2020 Hong Kong protests" entry on Wikipedia, https://en.wikipedia.org/wiki/2019–2020_Hong_Kong_protests#Online_confrontations

[6] "Protests against responses to the COVID-19 pandemic" entry on Wikipedia, https://en.wikipedia.org/wiki/Protests_against_responses_to_the_COVID-19_pandemic#Organisers_and_methods

[7] "2021 United States Capitol attack" entry on Wikipedia, https://en.wikipedia.org/wiki/2021_United_States_Capitol_attack#Planning

[8] "We’re underestimating the role of social media in mass shootings, and it’s time to change" on TNW, 2019, https://thenextweb.com/news/were-underestimating-the-role-of-social-media-in-mass-shootings-and-its-time-to-change

[9] Dark Web entry on Wikipedia, https://en.wikipedia.org/wiki/Dark_web

[10] "Prostitution on Facebook" entry on Without Bullshitblog, 2021, https://withoutbullshit.com/blog/prostitution-on-facebook

[11] Eugene Goostman entry on Wikipedia, https://en.wikipedia.org/wiki/Eugene_Goostman

[12] "Social bots – the technology behind fake news" entry on IONOS website, 2018, https://www.ionos.com/digitalguide/online-marketing/social-media/social-bots/

[13] "Facebook has shut down 5.4 billion fake accounts this year" by Brian Fung and Ahiza Garcia, CNN Business, 2019, https://edition.cnn.com/2019/11/13/tech/facebook-fake-accounts/index.html

[14] "5 things to know about bots on Twitter" by Stefan Wojcik for Pew Research Center, USA, 2018, https://www.pewresearch.org/fact-tank/2018/04/09/5-things-to-know-about-bots-on-twitter/

[15] "Interview with Eugene Goostman, the Fake Kid Who Passed the Turing Test" by Doug Aamoth for Time, 2014, https://time.com/2847900/eugene-goostman-turing-test/

[16] "Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media" by Nuha Albadi et al., University of Colorado Boulder, USA, 2019, https://arxiv.org/pdf/1908.00153.pdf

[17] "Bots, social networks and politics in Brazil" by Diretoria de Análise de Políticas Públicas, 2017, http://dapp.fgv.br/wp-content/uploads/2017/08/EN_bots-social-networks-politics-brazil-web.pdf

[18] "How bots are influencing politics and society" on Thomson Reuters Foundation, YouTube, 2020, https://www.youtube.com/watch?v=Xl0TrA8oXXo

[19] "A million people now access Facebook over Tor every month" by Kavvitaa S. Iyer on Techworm, 2016, https://www.techworm.net/2016/04/million-people-now-access-facebook-tor-every-month.html

[20] "The Trauma Floor: The secret lives of Facebook moderators in America" by Casey Newton on The Verge, 2019, https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

[21] "Police make 61 arrests in global crackdown on dark web" by Warwick Ashford on TechTarget, 2019, https://www.computerweekly.com/news/252460271/Police-make-61-arrests-in-global-crackdown-on-dark-web

[22] "Porn: you know it when you see it, but can a computer?" by Bijan Stephen on The Verge, 2019,  https://www.theverge.com/2019/1/30/18202474/tumblr-porn-ai-nudity-artificial-intelligence-machine-learning

[23] "Violence Detection Using Spatiotemporal Features with 3D Convolutional Neural Network" by Fath U Min Ullah et al., Sejong University, Seoul, Republic of Korea, 2019, https://mdpi-res.com/d_attachment/sensors/sensors-19-02472/article_deploy/sensors-19-02472.pdf

[24] "Identity management" entry on Wikipedia, https://en.wikipedia.org/wiki/Identity_management

[25] "Sybil attack" entry on Wikipedia, https://en.wikipedia.org/wiki/Sybil_attack

[26] "What Is a Certificate Authority (CA) and What Do They Do?" on Hashed Out, 2020,  https://www.thesslstore.com/blog/what-is-a-certificate-authority-ca-and-what-do-they-do/

[27] "Who Fact Checks the Fact Checkers? A Report on Media Censorship" by Phillip W. Magness et Ethan Yang, American Institute for Economic Research, USA, 2021, https://www.aier.org/article/who-fact-checks-the-fact-checkers-a-report-on-media-censorship/

[28] "Red tape" entry on Wikipedia, https://en.wikipedia.org/wiki/Red_tape

[29] "Decentralised content moderation" by Martin Kleppmann on personal blog, 2021, https://martin.kleppmann.com/2021/01/13/decentralised-content-moderation.html

[30] "Designing decentralized moderation" by Jay Graber on Medium, 2021,  https://jaygraber.medium.com/designing-decentralized-moderation-a76430a8eab

[31] "Crowdsourcing civility: A natural experiment examining the effects of distributed moderation in online forums" by Cliff Lampe et al., USA, 2014, https://www.academia.edu/24467177/Crowdsourcing_civility_A_natural_experiment_examining_the_effects_of_distributed_moderation_in_online_forums

[32] "Why Collective Intelligence Beats Individual Intelligence" by Pawel Brodzinski on author's blog, 2018, https://brodzinski.com/2018/10/collective-intelligence-individual-intelligence.html

[33] "Make verified ID a requirement for opening a social media account" e-petition on Petitions, UK Government and Parliament, 2022, https://petition.parliament.uk/

[34] "Is bitcoin legal? Cryptocurrency regulations around the world" by Tim Falk, Finder, USA, https://www.finder.com/global-cryptocurrency-regulations

[35] "Elon Musk says Twitter deal 'cannot move forward' until he has clarity on fake account numbers" by Sam Shead, CNBC, 2022, https://www.cnbc.com/2022/05/17/elon-musk-says-twitter-deal-cannot-move-forward-until-he-has-clarity-on-bot-numbers.html
