Uncovering Gender Bias within Journalist-Politician Interaction in Indian Twitter: Results
by @mediabias

Too Long; Didn't Read

In this paper, researchers analyze gender bias in Indian political discourse on Twitter, highlighting the need for gender diversity in social media.

This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.

Authors:

(1) Brisha Jain, Independent Researcher, India, [email protected];

(2) Mainack Mondal, IIT Kharagpur, India, [email protected].

5. RESULTS

5.1. Gender bias in interaction frequency and popularity of journalist-politician interactions (RQ1)

In order to explore the answer to the first research question, we started with checking if there is a gender bias in the frequency of interaction (i.e., frequency of mentions) between politicians and journalists on Twitter.


Male politicians are more frequently mentioned by journalists: Figure 1a compares the CDFs of the number of tweets posted by journalists mentioning male and female politicians. We make an interesting observation from this figure: when the receiving politician is male (i.e., in the MJ-MP and FJ-MP categories), the number of mentioning tweets (and hence the frequency of journalist-politician interaction) is higher than when a female politician is at the receiving end. A Kruskal-Wallis test on the number of tweets per journalist across the four categories revealed statistically significant differences (p << 0.05). We then performed pairwise Mann-Whitney follow-up tests among the four categories (MJ-MP, MJ-FP, FJ-MP, FJ-FP). There is no statistically significant difference between male and female journalists mentioning male politician accounts, and likewise none between male and female journalists mentioning female politician accounts. However, there are statistically significant differences between how frequently male/female journalists mention male politicians and how frequently they mention female politicians (all p << 0.05). Next, we compare the popularity per tweet directed toward male vs. female politicians.
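The statistical procedure above (an omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney follow-ups) can be sketched as follows. This is a minimal illustration with synthetic data: the category names follow the paper, but the mention counts are invented assumptions, deliberately inflated for male-politician categories to mimic the reported pattern.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu
import random

random.seed(42)

# Hypothetical per-journalist mention counts for the four sender-receiver
# categories (synthetic; in the paper these come from the Twitter corpus).
counts = {
    "MJ-MP": [random.randint(20, 60) for _ in range(150)],
    "FJ-MP": [random.randint(20, 60) for _ in range(150)],
    "MJ-FP": [random.randint(5, 25) for _ in range(150)],
    "FJ-FP": [random.randint(5, 25) for _ in range(150)],
}

# Omnibus test across all four categories.
_, p_kw = kruskal(*counts.values())
print(f"Kruskal-Wallis p = {p_kw:.2e}")

# Pairwise Mann-Whitney follow-up tests.
pairwise = {}
for a, b in combinations(counts, 2):
    _, p = mannwhitneyu(counts[a], counts[b])
    pairwise[(a, b)] = p
    print(f"{a} vs {b}: p = {p:.3f}")
```

Note that a real analysis would typically also apply a multiple-comparison correction to the pairwise p-values; the paper does not specify one, so none is shown here.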



Table 2: Sample excerpts of tweets posted by journalists mentioning politicians. We show tweets from four different categories based on the genders of sender and receiver.




[…] Our observation implies that Twitter users in India seem to ascribe greater credibility to the views of male journalists on female politicians than to the views of female journalists on female politicians. These observations hold for retweets as well. Overall, our popularity analysis of these four categories of tweets reveals that while journalists do not exhibit explicit bias in their interactions with politicians, there is evidence of gender bias in the amount of interest these interactions generate from active Twitter users.

5.2. Gender bias within the content of journalist-politician tweets (RQ2)

In the last section, our analysis showed a significant bias towards male politicians from both male and female journalists: the tweets mentioning male politicians are both more frequent and more popular. We therefore checked whether the content of these tweets might be responsible for this bias. Specifically, we examined the emotion and the topic of tweets written by male/female journalists and directed towards male/female politicians.


5.2.1. Emotion analysis: We used the TweetNLP tool to detect the emotions of the tweets in each category [6]. TweetNLP provides a diachronic large-language-model (TimeLMs) based approach for detecting emotion, specifically in multilingual tweets. The goal of this analysis is to determine whether there are significant differences in the emotion scores of tweets; if there are, that might indicate a gender bias inherent in the tweets based on the gender of sender and receiver. We considered four main emotions: anger, joy, optimism, and sadness, and each tweet in each of the four categories was assigned an emotion score along these dimensions. We then performed a Kruskal-Wallis test to identify whether any of the emotions differed across the four categories (MJ-MP, MJ-FP, FJ-MP, FJ-FP). The p-values for the four tests (one per emotion) ranged from 0.16 to 0.99, indicating no statistically significant difference in the emotions of the tweets.
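The per-emotion testing loop could be sketched as below. The emotion scores here are synthetic placeholders (in the paper they come from TweetNLP); all four categories are drawn from the same distribution, mirroring the "no significant difference" finding. Sample sizes are illustrative assumptions.

```python
from scipy.stats import kruskal
import random

random.seed(0)
CATEGORIES = ["MJ-MP", "MJ-FP", "FJ-MP", "FJ-FP"]
EMOTIONS = ["anger", "joy", "optimism", "sadness"]

# Hypothetical per-tweet emotion scores in [0, 1] for each
# (category, emotion) pair; identically distributed by construction.
scores = {
    (cat, emo): [random.random() for _ in range(200)]
    for cat in CATEGORIES for emo in EMOTIONS
}

# One Kruskal-Wallis test per emotion, across the four categories.
p_values = {}
for emo in EMOTIONS:
    samples = [scores[(cat, emo)] for cat in CATEGORIES]
    _, p = kruskal(*samples)
    p_values[emo] = p
    print(f"{emo}: p = {p:.3f}")
```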


5.2.2. Topic analysis: To dig further, we performed a topic analysis of the tweets collected across the four categories using Latent Dirichlet Allocation (LDA). The goal was to check whether the topics of the tweets changed based on the gender of the sender or receiver. As described in Section 4, we identified the optimal number of topics (which are essentially clusters of words) for each category; for each of the four categories, the optimal number was thirteen. We then extracted the thirteen topics per category with the LDA algorithm and performed a significant-word analysis on the detected topics. Specifically, for each category of tweets we took the topics (e.g., the topics from MJ-MP) and picked the five most significant words representing each topic. For each topic we then checked whether those words also occurred in the topics detected from the other categories of tweets (a match signifies that the words representing a topic are also present in topics detected from other categories). For each of the four categories, on average 81.5% to 93.8% of the significant words representing the topics also occur in topics detected from tweets of the other categories.
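The cross-category word-overlap measure can be sketched as follows, assuming the per-topic significant-word lists have already been extracted by LDA. The word lists below are invented illustrations (the paper uses thirteen topics per category, not two), so the computed overlap is only a toy value.

```python
# Hypothetical top-5 significant-word sets per topic for two categories.
topics = {
    "MJ-MP": [{"election", "minister", "rally", "party", "vote"},
              {"economy", "budget", "tax", "growth", "jobs"}],
    "MJ-FP": [{"election", "minister", "party", "vote", "campaign"},
              {"economy", "budget", "jobs", "reform", "growth"}],
}

def overlap(src: str, dst: str) -> float:
    """Fraction of significant words from `src` topics that also
    appear anywhere among the `dst` topics."""
    src_words = set().union(*topics[src])
    dst_words = set().union(*topics[dst])
    return len(src_words & dst_words) / len(src_words)

print(f"MJ-MP words found in MJ-FP topics: {overlap('MJ-MP', 'MJ-FP'):.1%}")
```

In the paper, this fraction is averaged over all pairs of categories, yielding the reported 81.5% to 93.8% range.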


This analysis supports our observation from the emotion analysis: the content of the tweets across the four categories is essentially the same. Still, tweets directed towards male politicians attract more interaction than tweets directed towards female politicians. Next, we explore a potential reason for this gender bias.

5.3. Potential reason for the gender bias

5.3.1. Inherent Gender Bias in Indian Twitter: We checked a simple statistic about top politicians: how many of the most popular politicians (based on the number of Twitter followers) are male vs. female. To that end, we leveraged our dataset of top politicians and checked the gender of the top 85 politicians (whose Twitter accounts are also part of this study). This analysis uncovered an unsettling gender imbalance: out of the 85 top politicians, 58 are male and 26 are female. Thus, popular male politicians are more than twice as numerous as popular female politicians. We postulate that this inequality is one of the key reasons behind our observed phenomenon of male politicians attracting significantly more interaction from the general public as well as journalists.


In fact, this inequality reflects a systemic bias deeply ingrained in society. This gender disparity extends its influence even to the realm of Twitter, where male politicians tend to garner a larger number of followers than their female counterparts. This phenomenon isn't isolated; it permeates various sectors, as illustrated by the dominance of men in top positions across industries. In corporate boardrooms, technology firms, and the entertainment sector, leadership roles are predominantly occupied by men. This systematic bias, rooted in societal norms, is further reinforced by the strong correlation between social capital and the attainment of positions of power. Consequently, popularity on Twitter serves as a stark reflection of this intrinsic bias. Addressing these disparities is paramount for fostering gender equality and dismantling deeply entrenched biases in society.