Characterizing Cable News Bias: Related Research You Should Know About

by @mediabias


Too Long; Didn't Read

Cable news channels have been shown to have an impact on the opinions and political behavior of voters. Less than 16 percent of the public has a “great deal” or “quite a lot” of trust in television news. This popular perception of cable news bias further invites a more rigorous characterization of bias on TV.

This paper is available on arXiv under a CC 4.0 license.


Authors:

(1) Seth P. Benson, Carnegie Mellon University (e-mail: [email protected]);

(2) Iain J. Cruickshank, United States Military Academy (e-mail: [email protected]).

Table of Links:

- Abstract and Intro
- Related Research
- Conclusion and References


Despite the rise of the internet as a means of news consumption, cable news remains a prominent source of news and political information for the American public. Since 2006, daytime cable news viewership has increased consistently, while primetime cable news has emerged as a major draw for viewers [2].

Cable news channels have also been shown to influence the opinions and political behavior of voters [3]. Taken together, these findings justify the continued study of cable news, which retains a great deal of attention and influence in modern mass media. If, as previous research indicates, cable news has a tangible effect on viewers’ political behavior, then understanding the biases present in the medium is especially important.

Media, including cable news, is broadly perceived as having political biases. Through the use of “expert reviewers” and public surveys, firms have assembled political bias ratings for news sources [4]. They have found that most media sources, and cable news sources in particular, have a political lean in their coverage.

This conclusion matches the view of the public. Americans’ trust in the media to “get the facts straight” and “deal fairly with all sides” has been steadily declining [5], [6]. Cable news is perceived especially poorly: less than 16 percent of the public has a “great deal” or “quite a lot” of trust in television news.

This popular perception of cable news bias further invites a more rigorous characterization of bias on TV. However, current assessments of cable news bias based on “expert reviewers” or public polling are limited by their subjective nature. Ultimately, these methods rely on human assessment, which precludes truly objective determinations.


Social science has taken several approaches to examining bias in media, and in cable news specifically. The first is media gatekeeping bias, or bias in the process of determining what or whom to cover [7]. On cable news, one way this can be assessed is through the figures brought on as guests. Some researchers have approached cable news bias by assessing the individual ideology scores of cable news speakers.

One approach uses publicly available political donation data to map the ideological ideal points of individuals [8]. Analyzing the individuals who appeared on cable news, researchers found that cable news stations exhibit a high degree of partisan bias in their guests, especially in primetime news slots [9]. Another way to map gatekeeping bias in cable news is by analyzing the appearances of members of Congress.

For each member of Congress, ideological ideal points can be calculated through spatial analysis of congressional voting data [10]. Using these ideal points, research has found that ideologically extreme members are over-represented on cable news [7], [11]. The media sources discussed on cable news can also reflect bias.

Mapping the sources cited on cable news reveals that channels of different partisan leans have distinct networks of news sources with little overlap, and an analysis of the think tanks cited on different cable news networks has demonstrated a preference for think tanks aligned with the channels’ partisan leans [12], [13].

Social science literature has also attempted to analyze the language of news media, although these efforts are often computationally limited. A large share of media bias analysis in social science has been qualitative in nature, which allows for maximum interpretation without restricting the analysis to specific methodological techniques [14]. From 1990 to 2005, qualitative analysis made up nearly half of social science media frame studies [15].

Codebooks are one of the most widely used forms of quantitative analysis. One application is manual coding, in which blinded coders follow explicit rules to evaluate whether media content expresses certain frames toward subjects across a set of clips or articles [16]. Computer-assisted coding has also been performed through automated keyword counts and through supervised learning, training statistical models on previously coded content [17], [18].

Another approach is centering resonance analysis, a form of network-based text analysis that “characterizes large sets of texts by identifying the most important words that link other words in the network” [19]. However, understanding the significance of the words identified by this method still requires human interpretation. While many social science studies leverage computational assistance, most remain dependent on human interpretation of text and employ techniques well-suited only to a limited scope of analysis.


Hamborg et al. identify three main types of bias in news production: fact selection, writing style, and presentation style [20]. Fact selection is strongly related to the gatekeeping bias in social science literature. Presentation style relates to the visual bias of news presentation, including the bias in picture selection identified in social science.

Finally, there is the bias introduced by the actual words and writing style used: writing style bias. Writing style bias comprises word choice (lexical bias) and framing bias in the text; a news piece can be biased both by the words or phrases it chooses to use and by the context in which keywords appear [20], [21].

Framing involves the language associated with different terms or issues, which is meant to produce a particular effect in the consumer of that language. Through the analysis of hubs in formal mental networks, co-occurrences of words within texts can be used to describe the viewpoints of text authors [22]. These networks perform dependency parsing to identify the syntactic relationships between words alongside their emotional perception, and they have been extended to understanding how media sources frame topics and figures [23].

Additionally, moral framing, determined by a supervised coding model, can describe the moral perspectives present in different sources’ coverage (e.g., injustice) and demonstrate differences in moral perspectives between liberal and conservative sources [24]. Finally, a recent sub-class of framing is information bias: the conveyance of side information about the main event in a text in order to frame that event in a certain way for the reader [25], [26].

Taken together, the framing of information, through various means, is an important determinant of a news piece’s bias and of the effect the piece is meant to have on its consumer.

Sentiment and affective analysis refer to computational practices that examine how positive or negative statements are toward their subjects. One way this can be done is through coded analysis of word affect scores. By coding the positive or negative affect of 2,258 words and applying the resulting lexicon to the words surrounding key names, researchers have been able to determine the press favorability of political figures or candidates [27].
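As a minimal sketch of this lexicon-based approach, one can average the affect of words near each mention of a target name. The word list and window size below are illustrative inventions, not those of the cited study:

```python
import re

# Illustrative affect lexicon; real studies code thousands of words.
AFFECT = {"strong": 1, "honest": 1, "popular": 1,
          "weak": -1, "dishonest": -1, "unpopular": -1}

def favorability(text, target, window=5):
    """Average affect of words within `window` tokens of each
    mention of `target` -- a crude press-favorability score."""
    tokens = re.findall(r"[a-z']+", text.lower())
    scores = []
    for i, tok in enumerate(tokens):
        if tok == target.lower():
            nearby = tokens[max(0, i - window): i + window + 1]
            scores += [AFFECT[w] for w in nearby if w in AFFECT]
    return sum(scores) / len(scores) if scores else 0.0

print(favorability("The senator gave a strong, honest speech.", "senator"))  # → 1.0
```

Averaging these scores across a corpus of coverage mentioning a candidate yields a favorability estimate in the spirit of the approach described above.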

Recently, more advanced techniques have been explored. When controlling for topic by selecting articles discussing the same issue, the Natural Language Toolkit’s VADER Sentiment Intensity Analyzer has been used to infer media source bias [28]. The tool identifies the sentiment and political tone of different articles, which allows researchers to determine reporting differences between news organizations [28].

Building on this concept of press favorability, one combination of sentiment and framing analyzed the affective scores of political news articles in relation to the partisan lean of the political figures they mention [29]. This model allowed for bias identification that improved users’ bias awareness in an experimental setting.

However, the larger number of speakers and issues discussed on cable news makes this model less equipped to analyze it. Additionally, limiting the analysis to sentiment toward political actors can miss important factors of bias, such as ideology.

However, despite the benefits of sentiment analysis and the recent research into computational tools for sentiment classification, sentiment alone has limited utility for understanding more contextual attitudes and opinions, such as stance. Stance detection entails the automated prediction of an author’s viewpoint or stance toward a subject of interest, often referred to as the “target” [30].

Typically, a stance toward a subject is categorized as “Agree”, “Disagree”, or “Neutral”. However, the labels representing stance can vary based on the specific target or context. Essentially, a stance mirrors an individual’s perspective toward a specific topic or entity.

Because classifying stance inherently requires context, stance detection remains a challenge for computational tools, especially those relying on keywords or supervised machine learning [31], [32]. Despite these challenges, a few very recent works show that stance classification can be done without labeled data (in an unsupervised or zero-shot setting), in much the same way sentiment classification is currently done [33]–[36].

When considering methods to characterize cable news bias, social science methods can be useful, but their reliance on human judgment limits their potential scope and introduces subjectivity. Supervised learning methods are also insufficient, because we lack objective bias labels for transcripts to train on.

Additionally, using only frame or sentiment analysis in computational techniques often requires selecting articles that discuss a narrow range of topics. This limitation becomes more apparent when applying such techniques to cable news, where a single show can cover a broad range of speakers and topics.

This study therefore aims to integrate topic modeling with previously used sentiment analysis techniques to create a more dynamic form of media analysis. Doing so allows cable news bias to be characterized across topics and shifts over time to be captured.
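A minimal sketch of this integration, using scikit-learn's LDA for topic modeling and a crude invented affect lexicon in place of a full sentiment model (the toy snippets, lexicon, and component count are all illustrative, not the paper's actual pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy transcript snippets; a real pipeline would use full show transcripts.
docs = [
    "the border crisis worsens as immigration surges",
    "immigration reform stalls at the border again",
    "markets rally as the economy adds jobs",
    "strong economy and jobs growth lift markets",
]
# Crude illustrative affect lexicon standing in for a sentiment model.
AFFECT = {"crisis": -1, "worsens": -1, "stalls": -1,
          "rally": 1, "strong": 1, "growth": 1, "lift": 1}

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topics = lda.transform(X).argmax(axis=1)  # dominant topic per snippet

# Average affect per inferred topic: bias characterized topic by topic
# rather than as a single corpus-wide score.
for t in sorted(set(topics)):
    words = " ".join(d for d, tt in zip(docs, topics) if tt == t).split()
    scores = [AFFECT[w] for w in words if w in AFFECT]
    print(t, sum(scores) / len(scores) if scores else 0.0)
```

Scoring sentiment within each inferred topic, rather than over the whole corpus, is what lets such an analysis handle the broad mix of speakers and subjects in a single cable news show.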
