
Concluding Our Characterizing Biases in Cable News Study


This paper is available on arXiv under a CC 4.0 license.


(1) Seth P. Benson, Carnegie Mellon University (e-mail: [email protected]);

(2) Iain J. Cruickshank, United States Military Academy (e-mail: [email protected]).

Abstract and Intro

Related Research

Conclusion and References


The primary objective of this paper was to develop a model capable of characterizing the biases of cable news programs given a large volume of text data in the form of transcripts. Our focus was on analyzing gatekeeping bias, which pertains to the topics discussed on cable news programs, and writing style bias, which refers to the language used to discuss these topics.

To achieve this, we dissected individual transcripts using Named Entity Recognition and Few-Shot Stance Detection, before employing Spectral Embedding and Clustering to group similar programs. Our results largely conformed to common expectations about cable news: cable news programs exhibit consistent biases that generally align with other programs on their network.
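The final grouping step described above can be sketched in miniature. The snippet below is an illustrative sketch only: the program names, topics, and stance scores are invented for demonstration (the paper derives stances from transcripts via Named Entity Recognition and few-shot stance detection), but the clustering mechanics mirror the described approach of embedding a program-similarity graph and clustering it.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical per-program stance vectors: rows are programs, columns are
# topics, values are stance scores in [-1, 1] (negative = opposed,
# positive = supportive). All names and numbers are illustrative.
programs = ["Program A", "Program B", "Program C", "Program D"]
stances = np.array([
    [0.8, -0.6, 0.5],   # Program A
    [0.7, -0.5, 0.4],   # Program B
    [-0.6, 0.7, -0.5],  # Program C
    [-0.7, 0.6, -0.4],  # Program D
])

# Build a similarity (affinity) matrix from cosine similarity, shifted
# into [0, 1] so it is a valid affinity for spectral clustering.
unit = stances / np.linalg.norm(stances, axis=1, keepdims=True)
affinity = (unit @ unit.T + 1.0) / 2.0

# Spectral clustering first embeds the affinity graph in a low-dimensional
# space (the spectral embedding step), then clusters programs there.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)

for name, label in zip(programs, labels):
    print(name, "-> cluster", label)
```

With this toy data, programs whose stance vectors point in similar directions (A and B, C and D) end up in the same cluster, which is the sense in which the paper groups programs with similar biases.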

Future research could explore different models or prompting techniques to better identify the stance of cable news text toward topics. Beyond our model, other dimensions of bias also merit investigation.
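To make the prompting-technique direction concrete, a few-shot stance-detection prompt of the general kind used in this line of work can be assembled as plain text and sent to a language model. The sketch below only builds the prompt string; the example statements, targets, and labels are invented for illustration and are not drawn from the paper's data.

```python
# Illustrative few-shot examples: (statement, target, stance label).
# These are hypothetical; a real system would curate examples carefully.
FEW_SHOT_EXAMPLES = [
    ("The new policy will finally fix our broken system.", "the policy", "favor"),
    ("This bill is a disaster for working families.", "the bill", "against"),
    ("The committee will meet again on Tuesday.", "the committee", "neutral"),
]

def build_stance_prompt(text: str, target: str) -> str:
    """Assemble a few-shot prompt asking a language model to classify
    the stance of `text` toward `target` as favor/against/neutral."""
    lines = [
        "Classify the stance of the statement toward the target "
        "as one of: favor, against, neutral.",
        "",
    ]
    for ex_text, ex_target, ex_label in FEW_SHOT_EXAMPLES:
        lines.append(f'Statement: "{ex_text}"')
        lines.append(f"Target: {ex_target}")
        lines.append(f"Stance: {ex_label}")
        lines.append("")
    lines.append(f'Statement: "{text}"')
    lines.append(f"Target: {target}")
    lines.append("Stance:")
    return "\n".join(lines)

prompt = build_stance_prompt(
    "Crime is out of control in this city.", "crime policy"
)
print(prompt)
```

Varying the number, order, and wording of such examples is exactly the kind of prompting experimentation the future-work direction suggests.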

For instance, integrating our model with work done on visual bias in television could potentially enhance its ability to characterize bias in cable news [46]. Future work could also aim to examine a broader time range. This could reveal changes in cable news programs over time and potentially identify years that do not exhibit the consistent network-driven clusters we identified in the 2020 data.


[1] Barry A Hollander. Tuning out or tuning elsewhere? partisanship, polarization, and media migration from 1998 to 2006. Journalism & Mass Communication Quarterly, 85(1):23–40, 2008.

[2] Pew Research Center. Cable News Fact Sheet.

[3] Gregory J. Martin and Ali Yurukoglu. Bias in Cable News: Persuasion and Polarization. American Economic Review, 107(9):2565–2599, September 2017.

[4] AllSides Media Bias Chart, February 2019.

[5] Tim Groeling. Media Bias by the Numbers: Challenges and Opportunities in the Empirical Study of Partisan News. Annual Review of Political Science, 16(1):129–151, 2013.

[6] Gallup Inc. Americans’ Confidence in Major U.S. Institutions Dips, July 2021. Section: Politics.

[7] Jeremy Padgett, Johanna L Dunaway, and Joshua P Darr. As Seen on TV? How Gatekeeping Makes the U.S. House Seem More Extreme. Journal of Communication, 69(6):696–719, December 2019.

[8] Adam Bonica. Avenues of influence: On the political expenditures of corporations and their directors and executives. Business and Politics, 18(4):367–394, 2016.

[9] Eunji Kim, Yphtach Lelkes, and Joshua McCrain. Measuring dynamic media bias. Proceedings of the National Academy of Sciences, 119(32):e2202197119, August 2022. Publisher: Proceedings of the National Academy of Sciences.

[10] Keith T Poole and Howard Rosenthal. On party polarization in congress. Daedalus, 136(3):104–107, 2007.

[11] Mark D. Harmon and Daniel J. Foley. Meet the Press Congressional Guests, 1947-2004. Electronic News, 1(2):121–133, May 2007. Publisher: SAGE Publications Inc.

[12] Bethany Anne Conway-Silva, Jennifer N. Ervin, and Kate Kenski. “Reliable Sources” in Cable News: Analyzing Network Fragmentation in Coverage of Reform Policy. Journalism studies, 21(6):838–856, 2020.

[13] Tim Groseclose and Jeffrey Milyo. A Measure of Media Bias*. The Quarterly Journal of Economics, 120(4):1191–1237, November 2005.

[14] Margrit Schreier. Qualitative content analysis in practice. Sage publications, 2012.

[15] Jörg Matthes. What’s in a Frame? A Content Analysis of Media Framing Studies in the World’s Leading Communication Journals, 1990-2005. Journalism & Mass Communication Quarterly, 86(2):349–367, June 2009. Publisher: SAGE Publications Inc.

[16] Catie Snow Bailard. Corporate ownership and news bias revisited: Newspaper coverage of the supreme court’s citizens united ruling. Political Communication, 33(4):583–604, 2016.

[17] Dustin Hillard, Stephen Purpura, and John Wilkerson. Computer-assisted topic classification for mixed-methods social science research. Journal of Information Technology & Politics, 4(4):31–46, 2008.

[18] Justin Grimmer and Brandon M Stewart. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political analysis, 21(3):267–297, 2013.

[19] Zizi Papacharissi and Maria de Fatima Oliveira. News Frames Terrorism: A Comparative Analysis of Frames Employed in Terrorism Coverage in U.S. and U.K. Newspapers. The International Journal of Press/Politics, 13(1):52–74, January 2008.

[20] Felix Hamborg, Karsten Donnay, and Bela Gipp. Automated identification of media bias in news articles: an interdisciplinary literature review. International Journal on Digital Libraries, 20(4):391–415, December 2019.

[21] Samantha D’Alonzo and Max Tegmark. Machine-learning media bias. Plos one, 17(8):e0271947, 2022.

[22] Massimo Stella. Forma Mentis Networks Reconstruct How Italian High Schoolers and International STEM Experts Perceive Teachers, Students, Scientists, and School. Education Sciences, 10(1):17, January 2020. Number: 1 Publisher: Multidisciplinary Digital Publishing Institute.

[23] Alfonso Semeraro, Salvatore Vilella, Giancarlo Ruffo, and Massimo Stella. Emotional profiling and cognitive networks unravel how mainstream and alternative press framed AstraZeneca, Pfizer and COVID-19 vaccination campaigns. Scientific Reports, 12(1):14445, August 2022.

[24] Usman Shahid, Barbara Di Eugenio, Andrew Rojecki, and Elena Zheleva. Detecting and understanding moral biases in news. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, pages 120–125, Online, 2020. Association for Computational Linguistics.

[25] Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, and Lu Wang. In plain sight: Media bias through the lens of factual reporting. arXiv preprint arXiv:1909.02670, 2019.

[26] Shijia Guo and Kenny Q Zhu. Modeling multi-level context for informational bias detection by contrastive learning and sentential graph network. arXiv preprint arXiv:2201.10376, 2022.

[27] Gregory Grefenstette, Yan Qu, James G Shanahan, and David A Evans. Coupling Niche Browsers and Affect Analysis for an Opinion Mining Application. Proceedings of 12th International Conference on Rech. d’Information Assistee par Ordinateur, 2004.

[28] Anuska Acharya and Grace Cox. Sentiment Analysis and NLP models for Identifying Biases of Online News Stations. 2021.

[29] Felix Hamborg, Kim Heinser, Anastasia Zhukova, Karsten Donnay, and Bela Gipp. Newsalyze: Effective Communication of Person-Targeting Biases in News Articles. In 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 130–139, Champaign, IL, USA, September 2021. IEEE.

[30] Nora Alturayeif, Hamzah Luqman, and Moataz Ahmed. A systematic review of machine learning techniques for stance detection and its applications. Neural Computing and Applications, 35(7):5113–5144, 2023.

[31] Lynnette Hui Xian Ng and Kathleen M Carley. Is my stance the same as your stance? a cross validation study of stance detection datasets. Information Processing & Management, 59(6):103070, 2022.

[32] Emily Allaway and Kathleen McKeown. Zero-shot stance detection: Paradigms and challenges. Frontiers in Artificial Intelligence, 5:1070429, 2023.

[33] Chandreen Liyanage, Ravi Gokani, and Vijay Mago. GPT-4 as a Twitter data annotator: Unraveling its performance on a stance classification task. 2023.

[34] Mark Mets, Andres Karjus, Indrek Ibrus, and Maximilian Schich. Automated stance detection in complex topics and small languages: the challenging case of immigration in polarizing news media. arXiv preprint arXiv:2305.13047, 2023.

[35] Bowen Zhang, Daijun Ding, and Liwen Jing. How would stance detection techniques evolve after the launch of ChatGPT? arXiv preprint arXiv:2212.14548, 2022.

[36] Iain J Cruickshank and Lynnette Hui Xian Ng. Use of large language models for stance classification. arXiv preprint arXiv:2309.13734, 2023.

[37] Seth Benson and Scott Limbocker. Campaigning through cable: Examining the relationship between cable news appearances and house candidate fundraising. American Politics Research, page 1532673X231175675, 2023.

[38] EntityRecognizer · spaCy API Documentation.

[39] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.

[40] Maarten Grootendorst. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794, 2022.

[41] OpenAI. GPT-4 Technical Report, 2023.

[42] C. Hutto and Eric Gilbert. VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1):216–225, May 2014. Number: 1.


[44] Andrew Ng, Michael Jordan, and Yair Weiss. On Spectral Clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, volume 14. MIT Press, 2001.

[45] Bin Luo, Richard C Wilson, and Edwin R Hancock. Spectral embedding of graphs. Pattern recognition, 36(10):2213–2230, 2003.

[46] Renita Coleman and Stephen Banning. Network TV News’ Affective Framing of the Presidential Candidates: Evidence for a Second-Level Agenda-Setting Effect through Visual Framing, 2006.
