
Big Tech’s Coronavirus Response Paves a New Path for Anti-Misinformation Efforts

by Bennat BergerJune 23rd, 2020

Dealing with misinformation has always been, let’s say, a touchy subject for Big Tech. 

Social media giants have long faced accusations of being too hands-off when it comes to policing fake news. Perhaps predictably, the one most under fire is Facebook -- a company that has long held to the PR line that as a social media platform, it has little to no responsibility for what outside users choose to post. 

“[Zuckerberg] insisted repeatedly that Facebook was a platform, not a publisher,” one reporter for the New Yorker explains of Facebook’s longstanding disavowal of its moderation responsibilities. “A publisher, after all, could be expected to make factual, qualitative, even moral distinctions; a publisher would have to stand behind what it published; a publisher might be responsible, reputationally or even legally, for what its content was doing to society.

“But a platform, at least according to the metaphor, was nothing but pure, empty space.” To moderate what constitutes factual content, tech leaders like Zuckerberg argued, would be to infringe on users’ right to free speech.

Faced with the reality of a global pandemic, however, such philosophies have begun to change. In recent weeks, major companies have started working in tandem to prevent the spread of COVID-related misinformation. 

On March 16th, a tech cohort that included Google, Facebook, Microsoft, LinkedIn, Reddit, YouTube, and Twitter released a joint statement announcing that they were “working closely together on COVID-19 response efforts [...] We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world. We invite other companies to join us as we work to keep our communities healthy and safe.”

The details have differed across platforms, but the goal -- to protect users from misinformation -- has been the same. Facebook has begun posting information cards that direct information-seekers to sources like the World Health Organization or local authorities; Twitter recently shared that it would provide non-governmental organizations with the advertising credits necessary to support public health campaigns.

Reddit informed its users that it would “quarantine” communities that share hoax or misinformation content by removing them from search results and warning visitors of the problematic content. 

But why the sudden reversal on moderation, after all this time? With COVID, tech platforms are finding themselves just as responsible for protecting lives as any doctors on the front lines. 

“We’re not just fighting an epidemic; we’re fighting an infodemic,” World Health Organization Director-General Tedros Adhanom Ghebreyesus said during an address at the Munich Security Conference in February. “Fake news spreads faster and more easily than this virus, and is just as dangerous.”

Ghebreyesus isn’t hyperbolizing. Dangerous fake headlines incite panic and ill-advised behavior.

Some warn readers of impending societal collapse and tell them to grab cash and food while they can; others take the opposite tack, dismissing the risk posed to young people and encouraging them to gather despite social distancing guidelines.

Still other fraudulent posts falsely reassure readers that they can protect themselves from the disease by drinking teas, ingesting colloidal silver, or diffusing essential oils. One viral post claimed that stomach acid would kill the coronavirus if a person just drank enough water.

These false headlines are dangerous not only for the risk they pose to individuals but also for the harm they can incite across communities. If people think that they can cure their illness by drinking tea and diffusing essential oils, they might dismiss the need for social distancing and infect countless others -- and put themselves at risk in the process.

Take the myth that young people can’t get sick as an example. Despite the evidence that nearly one in five people hospitalized with coronavirus in the U.S. are between the ages of 20 and 44, many young people have disregarded the warning.

CNN recently reported that at least one young adult caught the illness after attending a “coronavirus party” with their peers. The report notes that “the partygoers intentionally got together ‘thinking they were invincible’ and purposely defying state guidance to practice social distancing.”

Misinformation is dangerous. It poses a health risk that could impact not only the people who act on wrong information, but their neighbors, friends, and community members.

The potential for harm is so apparent that even tech companies, which are notorious for avoiding responsibility as moderators, feel compelled to step in and guide users towards reliable information. In this way, we can see content moderation as a necessity -- something that quite literally has the power to save lives.

But what if we were to extrapolate that same awareness out to public health and safety concerns beyond COVID-19? If our experience with the spread of the coronavirus shows us anything, it's that providing access to reliable sources and moderating fake news isn't about infringing on free speech; it's about protecting our communities.

The risk that other varieties of misinformation pose to health and wellbeing is well-established. The proliferation of manufactured headlines has been found to skew elections, promote misleading information about the safety of vaccines and cancer care, and even amplify climate change denialism.

Imagine a future where tech companies employ teams of researchers dedicated to debunking hoaxes, flagging pseudoscientific advice, and continually researching topics related to public health; imagine the good that could stem from common-sense moderation.

Tech companies have shown that when pressed, they can do a fantastic job of guiding users towards reliable information. So why can't they do the same with misinformation about vaccine safety, false cancer cures, climate change denial, and fake political news?

The dangers that misleading information about these topics poses may not be as immediate as the ones we see with COVID-19, but they are nonetheless real and pressing.