Keeping Speech Free: Tools to Improve US Digital Media Literacy and Critical Thinking

by Garry M. Paxinos, December 20th, 2022

Free speech is a cornerstone of democracy, which makes digital media literacy essential for a healthy society. Yet the U.S. ranks 15th out of 44 countries in areas that suggest good media literacy instruction, lagging behind several less-developed nations. But what do we mean by digital media literacy? And why is critical thinking important?


The phrase ‘digital media literacy’ centers on the ability to recognize three main categories of harmful information:

  1. Misinformation: False information shared without malicious intent
  2. Malinformation: Information rooted in fact but used out of context to harm
  3. Disinformation: False information deliberately created to manipulate or harm an individual


This kind of information dissemination can erode public trust in our political system and its institutions. Therefore, the safety, security, health, and general well-being of people and communities depend on digital media literacy.


Whether we like it or not, the amount of information we are exposed to daily has expanded due to the proliferation of social media and applications. People need to be continually educated on how to think critically about the information they encounter.


Let’s look at how a mix of technology, responsibility and ownership, and education can transform digital media literacy culture in the US with readily available resources.


Technology to identify bias and misinformation

Research has typically concluded that the news slants toward negative stories; we live in fear-driven societies. The idea that a newspaper's front page carries the day's worst news from around the world is largely true: the front page goes to the most eye-catching events, such as brutal murders, pandemics, natural disasters, and corruption. Any political bias, on the left or right, is thus perpetually compounded by this pervasive preference for sensationalized negativity.


While there is certainly room for improvement, several companies have made significant progress in spotting media bias. Testing by The Bipartisan Press of multiple Natural Language Processing (NLP) models found that Facebook's RoBERTa was among the best at the time. RoBERTa evolved from Google's Bidirectional Encoder Representations from Transformers (BERT), which trains a model to predict deliberately masked words in samples of unannotated English text.


The Bipartisan Press uses this software to categorize bias by website domain. With it, researchers could infer that CNN and the New York Times tilt to the left, while Fox News and the Washington Examiner lean to the right. However, this software, like others of its kind, is subject to the political biases of its training data: subjective labeling of that data will swing the models toward the biases of the model developers.
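Categorizing bias by domain amounts to aggregating per-article scores for each outlet. A minimal sketch of that aggregation step, using invented, illustrative scores on a left (−1.0) to right (+1.0) scale rather than any real model's output:

```python
from statistics import mean

# Hypothetical per-article bias scores; values are illustrative only and do
# not come from The Bipartisan Press or any real classifier.
article_scores = [
    ("cnn.com", -0.45),
    ("cnn.com", -0.30),
    ("nytimes.com", -0.25),
    ("foxnews.com", 0.50),
    ("foxnews.com", 0.35),
    ("washingtonexaminer.com", 0.40),
]

def bias_by_domain(scores):
    """Average the per-article bias scores for each website domain."""
    by_domain = {}
    for domain, score in scores:
        by_domain.setdefault(domain, []).append(score)
    return {domain: mean(vals) for domain, vals in by_domain.items()}

for domain, avg in sorted(bias_by_domain(article_scores).items()):
    print(f"{domain}: {avg:+.2f}")
```

The averaging itself is trivial; as the paragraph above notes, the hard and contestable part is how each article got its score in the first place.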


So far, this kind of analysis is not widely available to the public, but there are more applications that could have a mainstream benefit. Various AI engines currently incorporate the following metrics into their assessments:

  1. Sentiment analysis: A journalist's positivity or negativity toward the overall news material or the specific issue they write about
  2. Opinion analysis: Personal sentiments, perspectives, beliefs, or judgments in a journalist's writing
  3. Revision analysis: An investigation into how a news story evolves, and how its opinion and sentiment are manipulated, over time
  4. Propaganda analysis: Detection of up to 18 possible persuasion techniques to identify potential disinformation
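To make the first metric concrete, here is a toy lexicon-based sentiment scorer, a deliberately simple stand-in for the far more sophisticated NLP models described above. The word lists and scoring formula are invented for illustration:

```python
# Illustrative word lists -- real systems use learned models, not tiny lexicons.
POSITIVE = {"progress", "success", "recovery", "breakthrough", "hope"}
NEGATIVE = {"murder", "disaster", "corruption", "crisis", "fear"}

def sentiment_score(text):
    """Return a score in [-1, 1]; negative values suggest negative framing."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no sentiment-bearing words found
    return (pos - neg) / (pos + neg)

print(sentiment_score("Corruption crisis deepens as fear spreads"))  # negative
print(sentiment_score("Recovery brings hope after breakthrough"))    # positive
```

Even this crude version shows why headline framing matters: swapping a few loaded words flips the score entirely.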


Propaganda analysis is critical in assessing article bias. Current tools examine articles both as a whole and sentence by sentence. In addition, one also needs to look at the number of revisions, and how the metrics change across those revisions, to fully understand the intent of the authors, editors, and publications.
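Revision analysis can be sketched the same way: score each revision of a story with some metric and watch for drift. The revisions, word list, and metric below are all invented for illustration:

```python
def negativity(text):
    """Toy metric: fraction of words drawn from a small negative-word list."""
    negative = {"brutal", "chaos", "disaster", "corrupt", "fear"}
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in negative for w in words) / max(len(words), 1)

def revision_drift(revisions):
    """Return each revision's metric and the total shift from first to last."""
    scores = [negativity(r) for r in revisions]
    return scores, scores[-1] - scores[0]

# Hypothetical successive revisions of the same story.
revisions = [
    "City council debates new budget proposal",
    "City council clashes over corrupt budget proposal",
    "Chaos and fear as corrupt council pushes brutal budget",
]
scores, drift = revision_drift(revisions)
print(scores, drift)  # a positive drift means the story grew more negative
```

A real system would track many metrics per revision, but the principle is the same: the trajectory of the edits can reveal intent that no single snapshot shows.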


Education for media literacy

We are now seeing more and more students taught media literacy at school so they can defend against disinformation. Some schools have introduced the idea of students—many of whom are approaching voting age—spending up to two weeks learning about how viewpoints, biases, and lies might hide in the many sources of information they access. They pick up skills like document tracing, validating websites using outside resources, and developing a critical eye for claims made in YouTube videos and by TikTok influencers.


In the current climate, new educational initiatives are constantly being implemented. Google and Twitter have both launched prebunking campaigns to alert people to typical deception techniques. The real question is how they determine that something is mis-, mal-, or disinformation requiring prebunking. Can fact-checking itself be misused as disinformation?


Information doesn’t need to be shared maliciously to cause harm, and just because a story appears on multiple sites doesn't make it any more reliable or accurate. Misinformation can be just as harmful as mal- or disinformation.


The media responsibility

News organizations often put a small “sponsored content” label on their ads, but to the average viewer, this content looks just like regular news stories. There needs to be a collective effort to brand news stories with clarifications on the type of content—is it an opinion piece, news, or an advertisement? Some media organizations even have staff members writing both news stories and opinion pieces, leading to confusion about what constitutes a real byline.


Another potential issue comes from TV programming switching between ‘hard news’ and more opinionated views at the drop of a hat. This repeatedly pops up on 24-hour cable news channels, which could confuse and mislead a casual viewer. Perhaps the stations could introduce their own form of disclaimers to make it crystal clear, or technology could provide external approval services at scale.


The reality is that we have always been fighting against the tide of mis-, mal-, and disinformation, but it is easier now than ever before to spread propaganda and manipulate information. Technology has opened up more communication between us than we could ever have imagined. What is new here in the US is the organized effort to battle so-called misinformation in contravention of the First Amendment. Instead, what is needed is to show people how they are being manipulated.


Fake news can only be fought by the proliferation of accurate news. Fake news needs to be torn apart with facts, not handwaving. If people disagree, let them debate calmly and sanely in the public sphere. Via technological innovation, media literacy education, and more responsible press coverage, people can be equipped to critically assess and query information in a responsible way. To keep speech free, the US must invest in its digital media literacy and value critical thinking as an asset in the fight against disinformation.