
Effective Methods for Combating Fake News: Censorship, Debunking, and Prebunking

by THE Tech Editorialist

June 14th, 2024

Too Long; Didn't Read

This paper classifies misinformation countering methods into four categories: censorship, debunking, prebunking, and identification. It focuses on optimizing prebunking, using epidemic models to deliver factual information to users before misinformation reaches them while minimizing disruption to users.


Story's Credibility: Academic Research Paper

Part of HackerNoon's growing list of open-source research papers, promoting free access to academic material.

Authors:

(1) Yigit Ege Bayiz, Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA (Email: egebayiz@utexas.edu);

(2) Ufuk Topcu, Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, Texas, USA (Email: utopcu@utexas.edu).

Abstract and Introduction

Related Works

Preliminaries

Optimal Prebunking Problem

Deterministic Baseline Policies

Temporally Equidistant Prebunking

Numerical Results

Conclusion and References

A. Countering Misinformation

We classify misinformation countering methods into four categories: censorship, debunking, prebunking, and identification. The first three categories all attempt to reduce the impact of misinformation. Censorship refers to any method that aims to curb the spread of misinformation by controlling the propagation of information in the network [7], [8]. Censorship is common on social networking platforms, yet it raises significant concerns regarding freedom of speech.


Debunking refers to correcting misinformation by providing users with accurate information after the misinformation has already spread, whereas prebunking refers to issuing accurate information before the misinformation propagates. The numerous automated fact-checking methods that aim to debunk misinformative text content [9], [10] are an example of automated debunking. The current understanding in social psychology indicates that prebunking is more effective than debunking at countering misinformation [4], [6], [11]. In this paper, we contribute to the automation of prebunking by developing algorithms that automatically optimize the delivery times of prebunks to the users of a social networking platform.
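As a minimal sketch of this scheduling idea (not the paper's algorithm): given an estimator of when misinformation will reach each user, deliver each user's prebunk shortly before that estimated arrival. The names schedule_prebunks, estimate_arrival_time, and lead_time below are hypothetical, and the estimator could be, for instance, the SI-model estimate sketched in Section B.

# Hypothetical prebunk scheduler: deliver each prebunk `lead_time` before
# the user's estimated misinformation arrival time (never before time 0).
def schedule_prebunks(users, estimate_arrival_time, lead_time=1.0):
    return {u: max(0.0, estimate_arrival_time(u) - lead_time) for u in users}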


Identification refers to any method that aims to detect misinformative content within a social network. These methods often utilize natural language processing models [12], [13]. Del Vicario et al. [14] have shown that the propagation characteristics of misinformation admit detection without relying on content classification. More recently, Shaar et al. [15] introduced a method for identifying claims that have already been fact-checked. In this paper, we do not use misinformation detection directly; however, we assume the misinformation content is already known, so accurate misinformation detection remains a prerequisite for the methods we present.

B. Rumor Propagation Models

Determining optimal times for prebunking deliveries requires accurate estimates of when the misinformation will reach the user of interest. This estimation requires a rumor propagation model. In this paper, we rely heavily on epidemic models [16], also known as compartmental models, to model misinformation propagation. As their name suggests, these models are based on epidemiology: they model rumor propagation by partitioning users into categories, such as susceptible or infected, and then define rules by which these partitions interact over time. A wide range of epidemic models is used in misinformation modeling [17]. The most common among these are the SI [18], SIR [19], [20], and SIS [21], [22] models. SI (susceptible-infected) models are the simplest and are often the only models that permit analysis on arbitrary graph models. SIR (susceptible-infected-recovered) and SIS (susceptible-infected-susceptible) models refine the SI model, making it more accurate without introducing significant computational complexity to simulations. Despite these refinements, SI propagation still finds use due to its simplicity, and because its behavior is comparable to that of the SIS and SIR models during the initial phase of misinformation propagation, which is the most critical phase for countering misinformation. Throughout this paper, we use an SI model to estimate misinformation propagation.
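As an illustration of how such a model can drive prebunk timing, the following Python sketch simulates discrete-time SI propagation on a social graph and averages over Monte Carlo runs to estimate per-user arrival times. The adjacency-list representation, the transmission probability beta, and the function names are illustrative assumptions, not the paper's implementation.

import random
from collections import defaultdict

def simulate_si(neighbors, seeds, beta=0.1, max_steps=50, rng=None):
    # Discrete-time SI dynamics: infected nodes never recover, and at each
    # step every infected node infects each susceptible neighbor w.p. beta.
    rng = rng or random.Random()
    infected_at = {v: 0 for v in seeds}  # node -> first infection time
    for t in range(1, max_steps + 1):
        newly = set()
        for u in list(infected_at):
            for v in neighbors[u]:
                if v not in infected_at and rng.random() < beta:
                    newly.add(v)
        if not newly:
            break
        for v in newly:
            infected_at[v] = t
    return infected_at

def estimate_arrival_times(neighbors, seeds, beta=0.1, trials=500):
    # Monte Carlo mean of each node's infection time, conditioned on the
    # node being reached within max_steps; this could serve as the
    # arrival-time estimator for a prebunk scheduler such as the sketch
    # in Section A.
    sums, counts = defaultdict(float), defaultdict(int)
    for i in range(trials):
        run = simulate_si(neighbors, seeds, beta, rng=random.Random(i))
        for v, t in run.items():
            sums[v] += t
            counts[v] += 1
    return {v: sums[v] / counts[v] for v in counts}

# Example: misinformation seeded at node 0 of a small line graph.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(estimate_arrival_times(graph, seeds={0}))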


This paper is available on arXiv under a CC 4.0 license.

