
Multilingual Coarse Political Stance Classification of Media: Limitations & Ethics Statement

Too Long; Didn't Read

In this paper, researchers analyse the neutrality and stance evolution of AI-generated news articles across languages, using authentic news outlet ratings.

This paper is available on arXiv under a CC BY-NC-SA 4.0 license.

Authors:

(1) Cristina España-Bonet, DFKI GmbH, Saarland Informatics Campus.

5.1 Limitations

We assume that all media sources have an editorial line and an associated bias, and we treat the ILM as any other media source; we do not consider the possibility of a ChatGPT or Bard article being unbiased. This is related to the distant supervision method used to gather the data, which currently allows only for a binary political stance annotation. Since manually annotating hundreds of thousands of articles with political biases in a truly multilingual setting does not seem feasible in the foreseeable future, we decided to implement a completely data-based method and study its language and culture transfer capabilities.
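
As a minimal sketch of this kind of distant-supervision labelling, the snippet below propagates an outlet-level stance rating to every article from that outlet. The outlet names and ratings are hypothetical placeholders, not the paper's actual data sources.

```python
# Illustrative sketch of distant supervision for article-level stance labels.
# Outlet names and ratings below are hypothetical placeholders; the paper
# derives its labels from authentic news outlet ratings.

# Outlet-level political stance, e.g. taken from an external media-bias rating.
OUTLET_STANCE = {
    "example-left-daily": "L",
    "example-right-post": "R",
}

def label_articles(articles):
    """Propagate each outlet's stance to all of its articles (binary L/R)."""
    labelled = []
    for article in articles:
        stance = OUTLET_STANCE.get(article["outlet"])
        if stance is not None:  # skip outlets without a known rating
            labelled.append({"text": article["text"], "label": stance})
    return labelled

corpus = [
    {"outlet": "example-left-daily", "text": "Article about the death penalty..."},
    {"outlet": "unrated-wire", "text": "An article from an unrated outlet."},
]
print(label_articles(corpus))  # only the rated outlet's article gets a label
```

Note that every labelled article inherits its outlet's stance wholesale, which is exactly why an individual unbiased article cannot be represented under this scheme.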


Using distant supervision to detect political stance at the article level is nonetheless a delicate matter. First, the same newspaper can change its ideology over time. Second, and more related to the content of an individual article, non-controversial subjects might not carry any bias at all. Even where bias exists, it lies on a spectrum ranging from the extreme Left to the extreme Right rather than falling into a clear-cut division between the two ideologies.
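
The toy example below, under the assumption of a hypothetical continuous ideology score in [-1, 1], illustrates what the binary scheme discards: near-neutral articles are still forced into one of the two classes.

```python
# Hypothetical illustration of the information lost by binary annotation:
# a continuous ideology score in [-1, 1] is collapsed into two classes.

def binarise(score: float) -> str:
    """Collapse a signed ideology score into the binary L/R scheme."""
    return "L" if score < 0 else "R"

for score in (-0.9, -0.05, 0.0, 0.05, 0.9):
    # Near-zero scores (e.g. non-controversial topics) still receive a
    # hard L or R label, which the binary scheme cannot avoid.
    print(f"{score:+.2f} -> {binarise(score)}")
```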


To quantify and, where possible, mitigate these limitations, we plan to conduct a stylistic analysis of the human-annotated corpora (Baly et al., 2020; Aksenov et al., 2021) and compare it to our semi-automatically annotated corpus. As a follow-up to this work, we will also perform a stylistic analysis of the ILM-generated texts, since a similar style between the training data and these texts is needed to ensure good generalisation and transfer capabilities.
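
A stylistic comparison of this kind could, for instance, contrast simple surface features across corpora. The sketch below uses two such features (average sentence length and type-token ratio); the corpus contents are placeholders, not the actual datasets, and the features chosen are illustrative assumptions rather than the paper's method.

```python
# Minimal sketch of a surface-level stylistic comparison between corpora.
import re

def style_profile(texts):
    """Return coarse stylistic statistics for a list of documents."""
    tokens, sentences = [], 0
    for text in texts:
        # Count sentence-ending punctuation runs; assume at least one sentence.
        sentences += max(1, len(re.findall(r"[.!?]+", text)))
        tokens.extend(re.findall(r"\w+", text.lower()))
    return {
        "avg_sentence_len": len(tokens) / sentences,
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

human_annotated = ["The senate passed the bill. Critics disagreed."]
generated = ["The measure was adopted after debate and some criticism."]
print(style_profile(human_annotated))
print(style_profile(generated))  # similar profiles suggest better transfer
```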

5.2 Ethics Statement

We use generative language models, ChatGPT and Bard, to create our test data. Since we deal with several controversial subjects (death penalty, sexual harassment, drugs, etc.), the automatic generation might produce harmful text. The data presented here has not undergone any human revision; we analyse and provide the corpus as it was generated, along with an indication of the system versions used.