In 2020, world events like the US general election and the COVID-19 pandemic showed us how both censorship and misinformation can become political weapons.
With those lessons learned, how do we balance freedom of speech against these dangers in 2021?
Decentralizing online governance may be the answer.
In the Internet era, it’s almost impossible to censor information completely. There are always ways for stories and pictures to slip through the digital cracks, with voices emerging from beyond the firewalls of China, Uganda and Iran.
But censorship itself is just a tool. Different governments and corporations use it in different ways, and for different reasons.
How it’s applied in our social and political lives is what makes all the difference.
Censorship is the restriction or suppression of certain media and speech.
Material is often restricted because it is considered harmful, or too violent, sexually explicit or profane for specific audiences. This kind of censorship is meant to protect young children from content that is too mature for them, and can also shield groups from hate speech.
But censorship can also deny entire populations basic human rights and freedoms, such as freedom of information and the press. And online censorship and surveillance are only expected to get worse this year.
In the second week of January alone, the Ugandan government authorised a complete internet shutdown to aid its own political agenda. Ethiopia is expected to suffer a similar internet blackout as its own election approaches.
Censoring critical information can even contribute to and catalyse catastrophic global events.
For most countries, there has been a lack of government oversight and intervention, allowing the internet to evolve as a free market space.
China is a notable exception; there, the internet has been heavily censored from its inception. Other countries were slower to understand the impact of the internet and have only recently begun to restrict their citizens’ access.
In many liberal democracies, however, information has been allowed to flow freely across borders - no matter how inaccurate, offensive or dangerous.
Companies and online platforms have largely benefited from a lack of regulation as the internet has scaled over the last 30 years. The internet has therefore become a breeding ground for conspiracies, fake news and misinformation.
QAnon is a notorious conspiracy theory network that has thrived online.
Research has shown that users on Facebook create their own echo chambers, selecting information that reinforces their belief systems to form polarised groups. This behaviour, known as confirmation bias, “dominates information cascades and might affect public debates on socially relevant issues.”
According to the World Economic Forum, “massive digital misinformation remains one of the main threats to our society”, representing a political, social, and economic risk. It can be particularly harmful when it comes to matters of our health, such as the shaping of our responses to COVID-19.
Misinformation can also derail our democratic processes, as shown during the most turbulent US election in recent history, where baseless claims of widespread voter fraud were investigated - at taxpayers’ expense.
The responsibility to manage this misinformation has now fallen on the social media platforms themselves, as it becomes clear that even our governments would rather promote fiction than fact.
But right now, in the aftermath of a failed civilian coup that led to the storming of the US Capitol and five related deaths, this "management", known as moderation, has become an incredibly divisive topic.
Twitter’s de-platforming of the sitting President of the United States left many questioning why corporations have so much power to silence certain kinds of speech while elevating others. In this unprecedented case, the justification for Trump’s removal was to prevent further violence and deaths.
But allowing private entities the right to choose the flow of information and opinion is a slippery slope.
With the introduction of social media, how we consume and trust information has rapidly changed. Studies suggest that political and civil discourse in the US is being diminished by something called "Truth Decay". Among other things, this is defined as an increasing disagreement about facts and interpretation of data, and the blurring of lines between opinion and fact.
This is, in part, due to the disintermediation of information; where journalists would have traditionally communicated what was “happening” in the world, we can now access and even become sources of news ourselves.
While citizen journalism has helped us hold our governments accountable and document human rights abuses, it has also enabled false narratives to be created and spread with ease.
User-created misinformation continuously dilutes trust in science and research, obscuring fact from fiction. There have been some efforts from various groups and organisations to cut down this kind of misinformation. The UN launched its “take care before you share” campaign, and Instagram and Twitter now warn their users about posts deemed inaccurate by fact checkers.
However, this has done little to meaningfully address rampant online misinformation. In fact, research says that “falsehood consistently dominates the truth on Twitter... fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.”
So while social media seems to “systematically amplify falsehood at the expense of the truth” it also seems that no one has figured out how to reverse this trend (yet).
As gateways to the world’s information, internet giants now take center stage in the censorship debate. A number of congressional hearings have investigated the capacity of these businesses to influence the opinions and perceptions of the public.
The Cambridge Analytica scandal uncovered Facebook’s ability to sway political races: psychological profiles of its users were sold to political campaigns in the US and the UK during the 2016 election cycle. This was our first glimpse into the darker abuses of personal data, and into how social media can be weaponized by businesses and political parties alike.
Four years later, a different kind of battle took place, this time between the politicians and the platforms themselves. In November 2020, several tweets posted by then US president Donald Trump were flagged and hidden behind warning labels by Twitter, which found them to be “misleading about an election or other civic process.”
Among the most controversial was Trump’s claim that he lost the election due to widespread voter fraud in states like Pennsylvania. Twitter’s action angered Trump, who claimed the platform was censoring his speech in an act of political partisanship. Yet reports found that Trump himself was the single largest source of election misinformation.
Then, on January 6th, just weeks before the inauguration of President-elect Joe Biden, Trump’s refusal to support a peaceful transfer of power escalated with unimaginable consequences. Following a speech given at his own rally, during which he is accused of encouraging and inciting deadly violence, a pro-Trump protest group stormed the US Capitol in Washington DC. As a result, five people died, including one police officer.
While there were calls for Trump to de-escalate the situation and acknowledge his defeat, he seemingly did the opposite, even calling the rioters “special”. After several warnings, Twitter suspended Trump from the platform altogether, stating that the President had violated its Glorification of Violence policy, and deactivated the account due to the risk of inciting further violence.
Other companies swiftly followed suit - Facebook, Instagram, YouTube and Reddit, amongst others, banned Trump from their platforms.
The intention was to slow down the spread of misinformation and “prohibit content that promotes hate, or encourages, glorifies, incites, or calls for violence against groups of people or individuals.”
Google-owned YouTube recently announced that it will remove any videos alleging that "widespread fraud, errors, or glitches changed the outcome of any past US presidential election." This new policy will purge the platform of videos suggesting that Trump did not win the election fairly, or presenting supposed evidence of widespread voter fraud that changed the result.
The platform has already removed 8,000 channels and thousands of "harmful and misleading elections-related videos" for violating its existing policies.
Even Microsoft threw shade at Trump following his de-platforming...
In response to these policies, alternative social media sites have been emerging. Rather than resisting the tide of misinformation and hate speech, some choose not to curate content at all.
Parler, now infamously known as one of the key platforms where the Capitol attack was coordinated, relies only on volunteers to moderate user content. Unlike traditional social platforms, Parler lets people say whatever they are legally within their rights to say. Notably, the platform was favoured by right-wing supporters and white supremacists.
Following the Capitol attack, Parler was soon taken offline by Amazon Web Services (AWS), its cloud hosting provider. This sparked controversy, with many arguing that it was a blatant attack on free speech. Parler has since filed an antitrust lawsuit against Amazon, claiming the move was motivated by political animus.
Amazon responded that Parler’s removal was due to their “demonstrated unwillingness and inability to remove content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature.”
This lawsuit delves into the murkiest of ethical grey areas. It’s important to realise that, in the eyes of the law, freedom of speech is not an absolute right.
Most legal systems around the world set limits on the freedom of speech, especially when it conflicts with other rights and protections, as in cases of libel, slander, pornography, intellectual property or incitement to violence. Private companies can therefore find themselves on the wrong side of the law if they knowingly allow the coordination of domestic terrorist attacks on their platforms.
On a larger scale, many groups in society regularly experience discriminatory and violent acts towards them online. Online hate speech has even been linked to an increase in real-world violence toward minorities, including mass shootings, lynchings, and ethnic cleansing.
Experts say that through the internet, “individuals inclined toward racism, misogyny, or homophobia have found niches that can reinforce their views and goad them to violence.” Social media posts, and other online speech, “can inspire acts of violence.” Platforms have therefore developed ways to detect and mitigate such hate speech. These companies often shape their codes of ethics based on laws, whether moral or judicial, to protect these groups.
Companies are within their rights to remove content based on their policies and to prevent dangerous content.
But their ability to curate speech, truth and information is, at its core, dangerous in itself.
To fight back against the “unchecked power” of these giant corporations, the US Congress attempted to rewrite the rules altogether. Under Trump’s administration, the US Justice Department recently proposed a reform of Section 230 of the Communications Decency Act, the law that makes companies immune from legal liability for the content their users share. The law also protects the moderation of content, meaning a platform can remove posts that breach its own terms of service, so long as it acts in good faith.
It’s seemingly this last caveat that lawmakers take issue with, as they say these rules “disproportionately censor conservative speech”.
Censorship in media has traditionally only been afforded to certain regulatory bodies, such as the Federal Communications Commission (FCC) in the United States. These organisations create and enforce standards of what is considered decent and "indecent" on free-to-air broadcasting.
But there is no way to easily control content and information online. Social media companies have instead become “arbiters of truth”. (Interestingly, numerous courts have ruled that social media sites do not function as “21st century equivalent of a public square.”)
In a congressional hearing on Section 230, Zuckerberg spoke about the role of social media platforms in promoting freedom of speech - but in a carefully limited capacity. He explained how Facebook had helped during the 2020 US general election, working to prevent voter suppression while fighting several efforts to delegitimize the result.
He added, “Sometimes the right thing to do from a safety or security perspective isn’t the best for privacy or free expression, so we have to choose what we believe is best for our community and the world.”
We've been here before - Zuckerberg testifies in 2018.
But it is this specific ability to choose what is best for the community that is troubling to many, regardless of their political affiliations. Private companies such as Twitter and Facebook hold too much power through the art of algorithmic selection and moderation.
Even Jack Dorsey, Twitter’s CEO, agrees that corporate censorship sets a dangerous precedent. It normalises the behaviour of businesses to set the social standards when it comes to freedom of information and expression. With Trump’s deplatforming, we could see the end of an “era of free speech online that Twitter had itself helped create”.
The vision of an open internet comes under threat when we allow our freedoms to fall under the jurisdiction of Silicon Valley chiefs. In this way, technology companies are becoming “even more powerful than governments.”
And while these internet giants have been able to flourish in the wild, their centralized foundations have tied us all up in a complex web that is hard to untangle.
If we want both an open internet, and a future with less hate, racism and violence, a “key mistake is looking for solutions within the existing structure.”
Just as banning Trump won’t silence his ideologies, or even reduce the capacity of individuals to incite violence, current infrastructure cannot solve our problems.
Outdated, proprietary technology got us into this mess. But a new generation of decentralized technology can get us out of it.
Governance and power: private companies have too much of both, and politicians want even more.
Businesses intermediate all our data, the most precious resource of the new century. It’s where their power comes from - this control of the world’s information - and it’s been done mostly without regulation or oversight.
And they really stuffed it up.
The fundamental thesis of the Internet was about empowering the end users. Sadly, this “free-thinking, decentralized spirit of the early internet gradually morphed into the sort of monolithic culture its pioneers had sought to rebel against.”
What was born out of “communities of shared knowledge” has regressed into a world where power is fully concentrated. There has been no better alternative until now.
A return to a self-governing internet may seem a far-fetched utopian dream to most. The handful of corporations who rule the internet have always been the “filters that the entire web flows through...a layer in the internet stack; one that sits between us and the old idea of the world wide web.”
There are thriving examples of online self-governance in action. Though imperfect, Wikipedia is the most exemplary. As one of the “largest experiments in democratized moderation”, this global information-sharing platform proves that community-powered mediums are not only possible, but successful.
Wikipedia is self-regulated, with its pages peer-reviewed, fact-checked, edited and moderated by a community of volunteer contributors. Everyone must cite credible sources. If someone posts something incorrect or misleading, other contributors flag and remove it, holding the author accountable. Contributors also have the opportunity to “debate fiercely” over what information makes it onto a Wikipedia page, so that it remains truthful and accurate.
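To make that process concrete, here is a minimal sketch, in Python, of how such democratized moderation might work in software. Everything in it - the names, the citation rule, the three-flag threshold - is a hypothetical illustration of the general idea, not Wikipedia’s actual system.

```python
# A minimal, hypothetical sketch of community-driven moderation,
# loosely inspired by Wikipedia's model. The class names, citation
# rule and flag threshold are illustrative assumptions, not
# Wikipedia's actual mechanics.

from dataclasses import dataclass, field

@dataclass
class Edit:
    author: str
    text: str
    sources: list[str]                            # supporting citations
    flags: set[str] = field(default_factory=set)  # reviewers who objected

class CommunityModeration:
    def __init__(self, flag_threshold: int = 3):
        self.flag_threshold = flag_threshold  # consensus needed for removal
        self.edits: list[Edit] = []

    def submit(self, edit: Edit) -> bool:
        # Uncited claims never make it onto the page:
        # "everyone must cite credible sources".
        if not edit.sources:
            return False
        self.edits.append(edit)
        return True

    def flag(self, edit: Edit, reviewer: str) -> None:
        # Any contributor can flag a misleading edit; no single
        # authority decides alone. Removal requires rough consensus.
        edit.flags.add(reviewer)
        if len(edit.flags) >= self.flag_threshold and edit in self.edits:
            self.edits.remove(edit)

page = CommunityModeration()
page.submit(Edit("alice", "Cited claim.", ["https://example.org/source"]))
page.submit(Edit("bob", "Unsourced claim.", []))  # rejected outright
```

The point is structural: content is removed only when several independent reviewers agree, so no single moderator or company gets to define the truth.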
This democratized approach is the purest example of how content moderation could be applied even to the largest of social networks. It would prevent Big Tech from setting the terms of speech for everyone else. Perhaps then we could actually “hold Facebook, YouTube, Twitter, and even Parler accountable for all the ways their products have been used to incite violence online and in the real world.”
On a practical level, many projects are already deconstructing the internet, redistributing power and placing users once again at the top of the digital pyramid. Many of these projects have been inspired by the decentralized architecture of blockchain and cryptocurrency, which is censorship-resistant, open and permissionless by default.
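To see why that architecture matters, consider the toy sketch below. It implements nothing from any real project; it simply shows the two generic ingredients of censorship-resistance - content addressed by its own hash, and replication across many peers - so that no single takedown can erase the data.

```python
# A toy sketch of why decentralized, blockchain-style architectures
# resist censorship: content is addressed by its own hash and
# replicated across many peers, so there is no single server to
# switch off. Generic patterns only - not any real project's protocol.

import hashlib

class Node:
    def __init__(self, name: str):
        self.name = name
        self.store: dict[str, bytes] = {}  # content-id -> content

def publish(content: bytes, peers: list[Node]) -> str:
    # The ID is derived from the content itself, so any tampering
    # changes the address and is immediately detectable.
    cid = hashlib.sha256(content).hexdigest()
    for peer in peers:
        peer.store[cid] = content          # replicate everywhere
    return cid

def fetch(cid: str, peers: list[Node]) -> bytes | None:
    # Any surviving peer can serve the content; deplatforming one
    # host (as happened to the centrally hosted Parler) is not enough.
    for peer in peers:
        if cid in peer.store:
            return peer.store[cid]
    return None

peers = [Node(f"peer-{i}") for i in range(5)]
cid = publish(b"a post someone wants erased", peers)
peers[0].store.clear()                     # one node is "censored"...
assert fetch(cid, peers) is not None       # ...the content survives
```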
Even Twitter is funding a decentralized social media project.
With these new tools and networks, there are numerous ways for citizens, protesters, activists and journalists to stay connected and secure. This includes Mysterium Network - the company I work for - a decentralized network powered by everyday people. It helps people all around the world bypass censorship to access the internet and share information securely.
Censorship’s role in the digital age is harder than ever to justify. Yet in a centralised world, it’s currently the only effective way to stop the spread of misinformation and hateful rhetoric, and to protect communities from both online and real-world violence.
But how do we also protect our freedom of speech? How do we protect the Internet itself from the grip of self-interested corporate executives calling all the shots?
No amount of corporate policy can replace the power and voice of a democratic collective, properly connected and reaching consensus together.
That’s how we can all have the final word on free speech.