
Mitigating Framing Bias with Polarity Minimization Loss: Limitations, Ethics Statement & References

Too Long; Didn't Read

In this paper, researchers address framing bias in media, a key driver of political polarization. They propose a new loss function that minimizes polarity differences in reporting, effectively reducing bias.

This paper is available on arXiv under a CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Yejin Bang, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(2) Nayeon Lee, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(3) Pascale Fung, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology.


6.1. Limitations

The study is limited by its adherence to the benchmark’s English-based task setup. The analysis is constrained to political ideologies in the United States and to the English language. Additionally, the BART model’s 1024 sub-token input limit restricts the number of biased source articles that can be included in the input. While these limitations may narrow the scope of the study’s findings, they are not uncommon in natural language processing research. Nonetheless, future research may benefit from addressing them by exploring methods that cover a broader range of political ideologies (including non-U.S. ones) and languages, as well as by accommodating longer input texts to capture a more comprehensive set of source articles.
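
To see the input-length constraint concretely, the sketch below shows how concatenated source articles get cut off at BART’s 1024 sub-token limit. This is a minimal illustration using the Hugging Face transformers library; the checkpoint name and article strings are placeholders, not the authors’ exact setup.

```python
# Minimal sketch of BART's input-length constraint (illustrative only).
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

# Placeholder source articles on the same event, one per political leaning.
articles = [
    "Full text of a left-leaning article ...",
    "Full text of a centrist article ...",
    "Full text of a right-leaning article ...",
]

# Everything past 1024 sub-tokens is silently dropped, so later source
# articles may be partially or entirely excluded from the model input.
inputs = tokenizer(
    " ".join(articles),
    max_length=1024,
    truncation=True,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)  # at most (1, 1024)
```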

6.2. Ethics Statement

The issue of framing bias in news articles has been studied extensively, as such bias can polarize readers by steering their opinions about a certain person, group, or topic. To address this problem, our research introduces a loss function that can be incorporated into training to reduce framing bias in the generated summary.
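
As a minimal sketch of how such an auxiliary term can be combined with the standard objective: the weighting coefficient `alpha` and the `polarity_score` stand-in below are hypothetical placeholders, not the paper’s actual polarity minimization loss, which is defined in the main text.

```python
import torch
import torch.nn.functional as F

def polarity_score(logits: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a polarity measure: the average probability
    # mass the model places on a small, predefined set of polarity-laden
    # vocabulary ids (the ids below are placeholders).
    polar_token_ids = torch.tensor([42, 1337])
    probs = F.softmax(logits, dim=-1)
    return probs[..., polar_token_ids].sum(dim=-1).mean()

def training_loss(gen_loss: torch.Tensor, logits: torch.Tensor,
                  alpha: float = 0.1) -> torch.Tensor:
    # Total objective: standard cross-entropy generation loss plus a weighted
    # auxiliary polarity term (alpha is a hypothetical weighting coefficient).
    return gen_loss + alpha * polarity_score(logits)

# Example with dummy values: batch of 2 sequences, length 5, BART vocab size.
logits = torch.randn(2, 5, 50265)
gen_loss = torch.tensor(2.3)
print(training_loss(gen_loss, logits))
```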


However, it is important to recognize that automatic technologies can also have unintended negative consequences if they are not developed with careful consideration of their broader impacts. For example, machine learning models can introduce bias in their output, replacing known source bias with another form of bias (Lee et al., 2022). To mitigate this risk, Lee et al. (2022) suggest including explicit mention of the source articles alongside automatically generated neutral summaries. Furthermore, while our work aims to remove framing bias from human-generated articles, the generated text may itself contain hallucinations, a well-known problem of generative models (Ji et al., 2023). Thus, it is important to put a guardrail in place (e.g., providing source references) if such automatic technology is deployed in real use cases.
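
Such a guardrail can be as simple as attaching the source articles to every generated summary, in the spirit of Lee et al. (2022). The helper below is a hypothetical illustration; its name and output format are not part of the paper.

```python
def with_source_guardrail(summary: str, source_urls: list[str]) -> str:
    """Attach explicit source references to an automatically generated
    neutral summary (helper name and format are illustrative only)."""
    refs = "\n".join(f"- {url}" for url in source_urls)
    return f"{summary}\n\nSources:\n{refs}"

print(with_source_guardrail(
    "A neutral multi-document summary of the event.",
    ["https://example.com/left-article", "https://example.com/right-article"],
))
```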


Despite these challenges, our research contributes to the effort to mitigate human-generated framing bias and thereby reduce polarization in society. One use case is to aid human experts in producing multi-view synthesized articles that are free of framing bias. In terms of broader societal impact, we hope our work helps online users access less polarized information.

6.3. References

AllSides. 2021. Center – what does a "center" media bias rating mean?


Ramy Baly, Giovanni Da San Martino, James Glass, and Preslav Nakov. 2020. We can detect your bias: Predicting the political ideology of news articles. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4982–4991, Online. Association for Computational Linguistics.


Adriana Beratšová, Kristína Krchová, Nikola Gažová, and Michal Jirásek. 2016. Framing and bias: a literature review of recent findings. Central European journal of management, 3(2).


Dennis Chong and James N Druckman. 2007. Framing theory. Annu. Rev. Polit. Sci., 10:103–126.


Robert M Entman. 2002. Framing: Towards clarification of a fractured paradigm. McQuail’s Reader in Mass Communication Theory. London, California and New Delhi: Sage.


Robert M Entman. 2007. Framing bias: Media in the distribution of power. Journal of communication, 57(1):163–173.


Robert M Entman. 2010. Media framing biases and political power: Explaining slant in news of campaign 2008. Journalism, 11(4):389–408.


Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749.


Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, and Lu Wang. 2019. In plain sight: Media bias through the lens of factual reporting. arXiv preprint arXiv:1909.02670.


Matthew Gentzkow and Jesse M Shapiro. 2006. Media bias and reputation. Journal of Political Economy, 114(2):280–316.


Matthew Gentzkow, Jesse M Shapiro, and Daniel F Stone. 2015. Media bias in the marketplace: Theory. In Handbook of media economics, volume 1, pages 623–645. Elsevier.


Erving Goffman. 1974. Frame analysis: An essay on the organization of experience. Harvard University Press.


Felix Hamborg, Karsten Donnay, and Bela Gipp. 2019. Automated identification of media bias in news articles: an interdisciplinary literature review. International Journal on Digital Libraries, 20(4):391–415.


Felix Hamborg, Norman Meuschke, and Bela Gipp. 2017. Matrix-based news aggregation: exploring different news perspectives. In 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 1–10. IEEE.


Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12).


Daniel Kahneman and Amos Tversky. 2013. Prospect theory: An analysis of decision under risk. In Handbook of the fundamentals of financial decision making: Part I, pages 99–127. World Scientific.


Philippe Laban and Marti A Hearst. 2017. newsLens: Building and visualizing long-ranging news stories. In Proceedings of the Events and Stories in the News Workshop, pages 1–9.


Nayeon Lee, Yejin Bang, Tiezheng Yu, Andrea Madotto, and Pascale Fung. 2022. NeuS: Neutral multi-news summarization for mitigating framing bias. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3131–3148, Seattle, United States. Association for Computational Linguistics.


Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.


Yujian Liu, Xinliang Frederick Zhang, David Wegsman, Nicholas Beauchamp, and Lu Wang. 2022. POLITICS: Pretraining with same-story article comparison for ideology prediction and stance detection. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1354–1374, Seattle, United States. Association for Computational Linguistics.


Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174–184.


Fred Morstatter, Liang Wu, Uraz Yavanoglu, Stephen R Corman, and Huan Liu. 2018. Identifying framing bias in online news. ACM Transactions on Social Computing, 1(2):1–18.


Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.


Souneil Park, Seungwoo Kang, Sangyoung Chung, and Junehwa Song. 2009. Newscube: delivering multiple aspects of news to mitigate media bias. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 443–452.


Dietram A Scheufele. 2000. Agenda-setting, priming, and framing revisited: Another look at cognitive effects of political communication. Mass communication & society, 3(2-3):297–316.


AllSides. 2018. Media bias ratings. Allsides.com.


Timo Spinde, Christina Kreuter, Wolfgang Gaissmaier, Felix Hamborg, Bela Gipp, and Helge Giese. 2021. Do you think it’s biased? How to ask for the perception of media bias. In 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 61–69. IEEE.


Esther van den Berg and Katja Markert. 2020. Context in informational bias detection. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6315–6326, Barcelona, Spain (Online). International Committee on Computational Linguistics.


George Wright and Paul Goodwin. 2002. Eliminating a framing bias by using simple instructions to ‘think harder’ and respondents with managerial experience: Comment on ‘breaking the frame’. Strategic management journal, 23(11):1059–1067.


Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.


Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.


Yifan Zhang, Giovanni Da San Martino, Alberto Barrón-Cedeño, Salvatore Romeo, Jisun An, Haewoon Kwak, Todor Staykovski, Israa Jaradat, Georgi Karadzhov, Ramy Baly, et al. 2019b. Tanbih: Get to know what you are reading. EMNLP-IJCNLP 2019, page 223.