Mitigating Framing Bias with Polarity Minimization Loss: Experiments by @mediabias


Too Long; Didn't Read

In this paper, researchers address framing bias in media, a key driver of political polarization. They propose a new loss function to minimize polarity differences in reporting, reducing bias effectively.

This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Yejin Bang, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(2) Nayeon Lee, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(3) Pascale Fung, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology.

4. Experiments

4.1. Setup


4.2. Models

Baselines We compare with off-the-shelf multi-document summarization (MDS) models trained on the Multi-News dataset (Fabbri et al., 2019), namely BARTMULTI (Lewis et al., 2019) and PEGASUSMULTI (Zhang et al., 2019a). These models achieve strong performance on MDS and can also be applied to summarizing polarized articles, but they have never been trained to remove framing bias or to write neutrally. We also compare with the state-of-the-art models BARTNEUSFT and BARTNEUSFT-T (Lee et al., 2022), which are fine-tuned on the ALLSIDES dataset: BARTNEUSFT is fine-tuned only on the articles, whereas BARTNEUSFT-T additionally leverages the title of each article. We additionally report PEGASUSNEUSFT. Since simple fine-tuning may not be sufficient to learn about framing bias, we demonstrate how the polarity minimization loss mitigates framing bias more effectively than these baseline and SOTA models.
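For readers who want a concrete picture of the NEUSFT-style baselines, the sketch below shows how a BART summarizer could be fine-tuned with standard cross-entropy only (no polarity term) on polarized articles paired with a neutral target. This is an illustrative setup, not the authors' released code: the checkpoint id, data fields, and hyperparameters are assumptions.

```python
# Illustrative sketch of plain NEUSFT-style fine-tuning: standard
# cross-entropy on (polarized articles -> neutral summary) pairs.
# Checkpoint id and data layout are placeholders, not the paper's setup.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-large-cnn"  # placeholder; the baselines use BART trained on Multi-News
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def neusft_step(left_article, right_article, center_article, neutral_target):
    """One fine-tuning step: concatenated polarized articles in, neutral summary as target."""
    source = " ".join([left_article, right_article, center_article])
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    labels = tokenizer(neutral_target, return_tensors="pt",
                       truncation=True, max_length=256).input_ids
    loss = model(**inputs, labels=labels).loss  # token-level cross-entropy only
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```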





4.3. Results



Effective learning with extreme polarities We find that polarity minimization between the extreme ends (left and right) is more effective than a mixture that includes a center media outlet. Left- and right-leaning articles sit at opposite ideological ends, so they teach the model more about extreme polarity than center outlets do, even though center media are not completely free of bias. The qualitative analysis aligns with the quantitative measures. For instance, as illustrated in Table 2, the polarity-minimized models LR-INFO and LRC-AROUSAL both produce summaries that retain the essential information from the polarized input articles. LR-INFO, the least biased model, even uses a more neutral word choice (e.g., "protests" instead of "riots", matching the target Y).
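To make the left-vs-right pairing concrete, here is a minimal sketch of how a polarity minimization term could be combined with the generation loss: the same neutral target is scored once conditioned on the left article and once on the right article, and the gap between the two losses is penalized so the two extremes are pushed toward the same neutral output. This is one plausible reading of the idea discussed above, not the paper's exact formulation; the weighting factor and the pairing scheme are assumptions.

```python
# Illustrative polarity minimization term between left and right inputs.
# Not the paper's exact loss; `lam` and the loss-gap penalty are assumptions.
import torch

def polarity_min_step(model, tokenizer, left_article, right_article,
                      neutral_target, lam=0.5):
    labels = tokenizer(neutral_target, return_tensors="pt",
                       truncation=True, max_length=256).input_ids
    left = tokenizer(left_article, return_tensors="pt",
                     truncation=True, max_length=1024)
    right = tokenizer(right_article, return_tensors="pt",
                      truncation=True, max_length=1024)

    loss_left = model(**left, labels=labels).loss    # CE of neutral target given left-leaning input
    loss_right = model(**right, labels=labels).loss  # CE of neutral target given right-leaning input

    generation_loss = 0.5 * (loss_left + loss_right)
    polarity_gap = torch.abs(loss_left - loss_right)  # penalize divergence between the two ends
    return generation_loss + lam * polarity_gap
```

Under this reading, adding a center article to the mixture would dilute the contrast between the two conditioning signals, which is consistent with the observation above that left-right pairs are the more effective training signal.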


4.4. Analysis


Table 3: Ablation study: effect of applying only single-directional polarity minimization with the LR-INFO model.