How AI is Making it Easier to Spread Fake News

by Diego Lopez Yse, September 22nd, 2020

Is Bitcoin the revolution against unequal economic systems, or a scam and a money-laundering mechanism? Will artificial intelligence (AI) improve and boost humankind, or terminate our species? These questions present incompatible scenarios, but you will find supporters for all of them. They cannot all be right, so who’s wrong, then?

Ideas spread because they are attractive, whether they are good or bad, right or wrong. In fact, the “truth” is just one of the elements used or avoided in order to build any story or idea. There are different interests behind any statement (e.g. economic or sentimental), and messages are issued and received with huge amounts of human bias.

We’re living in the age of fake news. Fake news consists of deliberate misinformation presented under the guise of authentic news, spread via some communication channel, and produced with a particular objective, such as generating revenue or promoting or discrediting a public figure, a political movement, or an organization.

During the 2018 national elections in Brazil, WhatsApp was used to spread alarming amounts of misinformation, rumors and false news favoring Jair Bolsonaro. The platform made it possible to exploit encrypted personal conversations and chat groups of up to 256 people, where misinformation is much harder to spot than on the Facebook News Feed or in Google’s search results.

Last year, the two main Indian political parties took these tactics to a new scale, trying to influence India’s 900 million eligible voters by creating content on Facebook and spreading it on WhatsApp (both parties have been accused of spreading false or misleading information, or of misrepresentation, online). India is WhatsApp’s largest market (more than 200 million Indian users), and a place where users forward more content than anywhere else in the world.

But these tactics are not confined to the political arena: they are also used in activities ranging from manipulating share prices to attacking commercial rivals with fake customer reviews. How can fake news have such an impact? The answer lies in the way humans process information.

Understanding is Believing

Baruch Spinoza suggested that all ideas are accepted (i.e. represented in the mind as true) prior to a rational analysis of their veracity, and that some ideas are subsequently unaccepted (i.e. represented as false). In other words, the mental representation of a proposition or idea always has a truth value associated with it, and by default this value is true.

The automatic acceptance of representations seems evolutionarily prudent: if we had to go around checking every percept all the time, we’d never get anything done. Understanding and believing are not two independent stages. Instead, understanding is already believing.

How the Future Looks

Massive amounts of data have given birth to AI systems that already produce human-like synthetic text, powering disinformation operations at a new scale. Built on Natural Language Processing (NLP) techniques, lifelike text-generating systems have proliferated, and they are becoming smarter every day.

This year, OpenAI announced the launch of GPT-3, a tool that produces text so realistic that in some cases it is nearly impossible to distinguish from human writing. GPT-3 can also figure out how concepts relate to each other and discern context. Tools like this one can be used to generate misinformation, spam, phishing, abuse of legal and governmental processes, and even fake academic essays.
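To get a sense of how accessible this technology has become, here is a minimal sketch of machine text generation using the openly available GPT-2 model via Hugging Face’s transformers library (GPT-3 itself is only reachable through OpenAI’s API; the model choice and sampling parameters are illustrative assumptions, not details from this article):

# Minimal text-generation sketch using the open-source GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small pretrained language model for text generation
generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"

# Sample a short continuation; do_sample makes each run different
outputs = generator(prompt, max_length=60, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])

A model this small already produces plausible-sounding sentences; scaled up, the same recipe yields the nearly human-like text described above.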

Deepfakes

Deepfakes are video, photo and audio fakes produced by technologies that make it possible to fabricate evidence of scenes that never happened. These technologies can enable bullying (placing people into compromising scenarios), boost scams (tricking employees into sending money to fraudsters), damage a company’s reputation, or even pose a danger to democracies by putting words in the mouths of politicians.

(Facial reenactment tech manipulates Putin in real time. Source: RT)

But deepfakes have another pernicious effect: they make it easier for liars to deny the truth, in two ways. First, if accused of having said or done something that they did say or do, liars can generate and spread altered sound or images to create doubt. Second, they can simply denounce the authentic as fake, a technique that becomes more plausible as the public grows more aware of the threats posed by deepfakes.

How can we fight this battle?

Arthur Schopenhauer believed that our knowledge of the world is confined to knowledge of appearances rather than reality, and that observation probably still holds today. In a world of appearances (with social media as one of its icons), it seems nearly impossible to avoid being deceived. But there is always a way to resist.

Fighting fake news is a double-edged sword. On the one hand, warning news consumers and promoting tools that help them question their sources of information is a very positive thing; on the other, we may end up producing news consumers who no longer believe in the power of well-sourced news and mistrust everything. Down that path lies a general state of disorientation, with news consumers uninterested in, or unable to determine, the credibility of any news source.

We need technology to fight this battle. AI makes it possible to find words and patterns that indicate fake news in huge volumes of data, and tech companies are already working on it. Google is working on a system that can detect altered videos, releasing its datasets as open source and encouraging others to develop deepfake detection methods.
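As a rough illustration of that pattern-finding idea, here is a minimal sketch of a text classifier built with scikit-learn; the tiny inline dataset and headlines are invented for illustration, and a real system would train on a large labeled corpus:

# Bag-of-words fake-news classifier sketch.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled headlines: 1 = fake, 0 = legitimate
texts = [
    "SHOCKING cure doctors don't want you to know about",
    "You won't BELIEVE what this politician just said",
    "Central bank holds interest rates steady at 2%",
    "City council approves new public transport budget",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each headline into word-frequency features;
# logistic regression then learns which words signal fake news
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle pill melts fat overnight, experts stunned"]))

Production systems add many more signals (source reputation, propagation patterns, fact-check databases), but the core idea of learning telltale wording patterns is the same.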

YouTube declared that it won’t allow election-related “deepfake” videos and anything that aims to mislead viewers about voting procedures and how to participate in the 2020 census.

(A sample of videos from Google’s contribution to the FaceForensics benchmark. Source: Google AI Blog)
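Detection methods built on benchmarks like FaceForensics typically work frame by frame. The sketch below shows that general approach in PyTorch: fine-tuning a pretrained image classifier to label face crops as real or manipulated. The architecture, tensor shapes and training step are assumptions chosen for illustration, not Google’s actual pipeline:

# Frame-level deepfake-detection sketch: fine-tune a pretrained CNN
# to classify face crops as real (0) or manipulated (1).
# Requires: pip install torch torchvision
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet and swap in a two-class head
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    # frames: (batch, 3, 224, 224) float tensor of face crops
    # labels: (batch,) long tensor, 0 = real, 1 = fake
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative call with random tensors standing in for real face crops
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))))

A video is then flagged when enough of its frames are classified as manipulated.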

As data consumers, we have the means to fight back. Daniel Gilbert is a social psychologist who found that people do have the potential to resist false ideas, but that this potential can only be realized when a person has (a) logical ability, (b) a set of true beliefs to compare against new beliefs, and (c) motivation and cognitive resources. This means that we can resist false ideas, but also that anyone who lacks any of these characteristics is easy prey for fake news.

“Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.” (Isaac Asimov)

Interested in these topics? Follow me on LinkedIn or Twitter

Also published at https://medium.com/datadriveninvestor/the-future-of-fake-news-2093f2652ce6