Fears of OpenAI’s Fake Text Generator Are Ungrounded

by Ara Jam, February 20th, 2019

On February 14, OpenAI announced a groundbreaking achievement: an AI model that can produce human-like text. Given a few sentences, the model can write a full page on the topic without forgetting the context or losing its grip on the syntax.

But contrary to its usual practice, OpenAI did not open-source the full model. It released only a smaller version on GitHub, for fear that the technology could be "used to generate deceptive, biased, or abusive language at scale."
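To make the scale of that release concrete, here is a minimal sketch of prompting the smaller, publicly released model to continue a paragraph. It uses the Hugging Face transformers package rather than OpenAI's own sampling scripts, so the model name, sampling settings, and API calls are assumptions for illustration, not the official release code.

```python
# Hypothetical sketch: sampling a continuation from the small, publicly
# released GPT-2 checkpoint via the Hugging Face "transformers" library
# (an assumption; OpenAI's release ships its own sampling scripts).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "On February 14, OpenAI released news about its groundbreaking achievement."
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation varied while staying on topic.
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A few lines like these are enough to produce a page of fluent text from a short prompt, which is precisely the capability that worried OpenAI.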

Is that a valid concern? How realistic is deep-faked AI content, and how effective is it?

A history of deep fakes

The term "deep fakes" stems from "deep learning," a subset of machine learning that uses neural networks for pattern detection. The technique has proven remarkably effective, capable of producing convincing fake videos of Hollywood actors and political figures.

Researchers pushed the technique further, producing realistic pictures of humans, cars, cats and other animals, none of which exist in the real world. They have also managed to fake body movements, making people appear to dance in ways they probably never could.

The technology raised many concerns as it spread, especially because, once the tools went public, ordinary people with no coding knowledge would be able to create believable footage. There were worries that AI could disrupt the U.S. midterm elections, or that the technology would cast a shadow of doubt even on legitimate videos.

So far, there have been no recorded cases of deep fakes actually being weaponized. The Belgian socialist party sp.a released a deep-faked clip of Trump to promote a climate change petition, but after nearly nine months the petition had gathered only 2,664 signatures.

Fears of AI's use in disinformation are not ungrounded, though, given Facebook's Cambridge Analytica scandal, in which AI was used to determine how each individual should be targeted in order to nudge their behaviour in a certain direction. But to understand the true threats, we must look at how disinformation really works.

Disinformation needs more than AI

President Trump did not use disinformation to win the election. In fact, the way his campaign used AI was similar to the way President Obama's had before him. The only difference was that Trump's team was not transparent about how it intended to use the data.

A genuine example of disinformation is spreading conspiracy theories or fake news, for instance the story that President Emmanuel Macron was planning to hand the French regions of Alsace and Lorraine to neighbouring Germany. Another example came when Russia attacked Ukrainian ships in the Black Sea and then launched a disinformation campaign to control the narrative.

Looking at these examples, it is evident that those campaigns required extensive work and were carried out by governmental bodies. And they did not need AI to succeed.

A text generation engine, no matter how clever, is not enough to carry out a disinformation campaign. The attack must be widespread, span many accounts, and cover as many news channels as possible. It would be naïve to assume the targets of an attack could be misled by a single piece of news, especially when the truth is out there to expose the lie.

In the words attributed to Joseph Goebbels: "If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State."

Disinformation campaigns need a committed organization to succeed, and even that success would be temporary.

Another problem with OpenAI's model (called GPT-2) is that, like any other AI system developed so far, it does not understand what it says. It can brilliantly complete a sentence, and it can write the rest of a page given an initial paragraph. But it cannot state facts reliably, nor link to any sources. It could make writing easier for trolls, but a disinformation campaign would still need many actors to spread those words. And those actors can produce more effective texts on their own, without using GPT-2.

According to OpenAI's blog, the model can generate "reasonable samples about 50% of the time." Fine-tuning the model for a specific task would raise that percentage, but it does not turn the model into a fully automated system. Humans, on the other hand, produce reasonable samples all the time, and can even copy-paste the same misleading information across multiple outlets with minimal modification.
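As a rough illustration of what "fine-tuning for a specific task" involves, the sketch below continues training the small released checkpoint on a narrow corpus. The library, file name, and hyperparameters are assumptions for illustration; OpenAI has not published a recipe for the withheld larger models.

```python
# Hypothetical sketch: fine-tuning the small GPT-2 checkpoint on a narrow
# corpus so its samples stay on one topic more often. Uses PyTorch and the
# Hugging Face "transformers" library; "domain_corpus.txt" is a placeholder.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

text = open("domain_corpus.txt", encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt")["input_ids"]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
block = 512  # chunk length fed to the model at each step

for start in range(0, ids.size(1) - block, block):
    chunk = ids[:, start:start + block]
    # For language modelling the labels are the inputs; the model shifts
    # them internally to score next-token predictions.
    loss = model(chunk, labels=chunk).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Even a fine-tuned model is still only predicting the next token; it has no notion of whether the claims it generates are true.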

Another concern is misleading product reviews. Again, the AI can produce deceptive text, but to be effective it needs to come from many different accounts. AI can make the job easier, but it does not replace the malicious actors themselves.

Conclusion

GPT-2 does a brilliant job, and I can't wait to see it released in full (whenever that may be). But malicious actors do not need AI to carry out their nefarious schemes. We are still closer to a human apocalypse than a robot apocalypse.