Evidence That AI Will Soon Pass the Turing Test (or maybe it already has) by @diego-lopez-yse


Diego Lopez Yse (@diego-lopez-yse)

Using AI & Data Science to create real social impact. Working in AI? Let's get in touch.

You might be wondering if machines are a threat to the world we live in, or if they’re just another tool in our quest to improve ourselves. If you think that AI is just another tool, you might be surprised to hear that some of the biggest names in technology have a clear concern for it. As Mark Ralston wrote, “The great fear of machine intelligence is that it may take over our jobs, our economies, and our governments”.

If you disagree with this idea, that’s OK, because I didn’t write the previous paragraph. An Artificial Intelligence (AI) solution did. I used a tool called GPT-2 to synthetically generate that text, just by feeding it the subtitle of this article. Looks pretty human, doesn’t it?


(Using GPT-2, you can synthetically generate text (highlighted in blue) just by providing an initial input (marked in red). Source: Transformer Hugging Face)

GPT-2 is a text-generation system launched by OpenAI (an AI company co-founded by Elon Musk) that can generate coherent text from minimal prompts: feed it a title and it will write a story; give it the first line of a poem and it’ll supply a whole verse. To explore some of its capabilities, take a look at Fake Fake News, a site that uses GPT-2 to generate satirical news articles in categories like politics, sports or entertainment.
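GPT-2’s behavior is far beyond anything a simple statistical model can do, but the underlying idea of predicting the next word from the ones before it can be sketched with a toy word-level Markov chain. This is only an analogy for intuition, not OpenAI’s architecture or API; all names here are made up:

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Starting from `seed`, repeatedly sample a plausible next word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the machine writes a story and the machine writes a poem "
          "and the story becomes a poem")
model = build_model(corpus)
print(generate(model, "the"))
```

Where this toy looks one word back, GPT-2 conditions on thousands of tokens of context with a learned neural network, which is what lets it stay (mostly) coherent over whole paragraphs.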

But the big breakthrough happened this year, when OpenAI launched GPT-2’s successor, GPT-3: a tool so advanced that it can figure out how concepts relate to each other and discern context.

From an architecture perspective, GPT-3 is not an innovation at all: it takes a well-known approach from machine learning, artificial neural networks, and trains them on data from the internet. The real novelty is its massive size: with 175 billion parameters, it’s the largest language model ever created, trained on the largest dataset of any language model.


(Example of GPT-3 creating an email message. Source: WT.Social)

Because it can be ‘re-programmed’ for general tasks with very little fine-tuning, GPT-3 seems able to do just about anything once conditioned with a few examples: you can ask it to be a translator, a programmer, a poet, or a famous author, and it can take on the role with fewer than 10 training examples.
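That conditioning is nothing more than pasting solved examples into the prompt before the new input. A sketch of how such a few-shot prompt might be assembled (the translation pairs and formatting here are illustrative, not the exact prompts OpenAI uses):

```python
def few_shot_prompt(task, examples, query):
    """Build a few-shot prompt: a task description, a handful of
    solved examples, then the new input the model should complete."""
    lines = [task, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea otter", "loutre de mer")],
    "machine learning",
)
print(prompt)
```

The model never updates its weights: it simply continues the text, and the pattern in the prompt steers it toward producing a translation.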

If you’re curious about its performance, The Guardian showed it could synthetically write a whole news article from an initial statement, one that took less time to edit than many human-written articles.

More than words

Take a look at the image below. What do you think of this apartment? Would you consider renting it?


Looks good, right? Well, there’s one minor detail: the place doesn’t exist. The whole listing was made by an AI. None of the pictures, nor the text, came directly from the real world.

The listing title, the description, the picture of the host, even the pictures of the rooms were all generated. Trained on millions of pictures of bedrooms, millions of pictures of people, and hundreds of thousands of Airbnb listings, the AI behind thisrentaldoesnotexist.com was able to create this result. You can try it yourself if you want.

These fake images were produced using Generative Adversarial Networks (GANs for short), which are artificial neural networks capable of producing new content. GANs are an exciting innovation in AI: they can create new data instances that resemble the training data, and are widely used in image, video and voice generation.


GANs contain a “generator” neural network and a “discriminator” neural network which interact in the following way:

The generator produces fake data samples to mislead the discriminator, while the discriminator tries to determine the difference between the fake and real data, evaluating them for authenticity.

By iterating through this cycle, both networks get better and better until the generator produces outputs realistic enough to fool the discriminator. Because the discriminator “competes” against the generator, the system as a whole is described as “adversarial”.
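The adversarial loop can be sketched on a deliberately tiny problem: a two-parameter “generator” learns to match a 1-D Gaussian, while a logistic-regression “discriminator” tries to tell real samples from fake ones, with the gradients written out by hand. This is a toy illustration of the dynamic described above, not how real GANs are trained (those use deep networks on images, and frameworks that compute gradients automatically):

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

rng = random.Random(42)
REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real data" distribution

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(3000):
    x_real = rng.gauss(REAL_MEAN, REAL_STD)
    z = rng.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update: push D(fake) toward 1 (non-saturating loss)
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * (-(1 - d_fake) * w * z)
    b -= lr * (-(1 - d_fake) * w)

fake_mean = sum(a * rng.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"real mean={REAL_MEAN}, generated mean={fake_mean:.2f}")
```

Even in this toy, the generator’s output drifts toward the real distribution purely because of the discriminator’s feedback; neither network ever sees the other’s parameters, only samples.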


(Example of a GAN training process. At the end, the distributions of the real (in green) and fake samples (in purple) converge. Source: GAN Lab)

Also, GANs have some special capabilities: the data used for training them doesn’t need to be labelled (as the discriminator can judge the output of the generator based entirely on the training data itself), and adversarial networks can be used to efficiently create training datasets for other AI applications. GANs are definitely one of the most interesting concepts in modern AI, and we will see more exciting applications in the coming years.

Final thoughts

I know it’s shocking, but there’s no reason to be scared (at least not yet) by these technologies. None of the examples provided are magical, and they are the result of scientific research that can be explained and understood. Above all, although some AI outputs can give all the appearance of being “intelligent”, they are still very far away from any human cognition process.

GPT-3 possesses no internal representation of what words actually mean, and lacks the ability to reason abstractly. It can also lose coherence over sufficiently long passages, contradict itself, and occasionally produce non-sequitur sentences or paragraphs. GPT-3 is a revolutionary text predictor, but not a threat to humankind.

On the other hand, GANs need a wealth of training data to get started: without enough pictures of human faces, a GAN won’t be able to come up with new faces. They also frequently fail to converge and can be really unstable, since good synchronization is required between the generator and the discriminator; and once a model is trained, it lacks the generalization capabilities to tackle different types of problems. They can also have trouble counting, understanding perspective, and recognizing global structure.

No single breakthrough will completely change the world we live in, but we’re witnessing such a massive change in the way we interact with technology that we should prepare ourselves for the world to come. My suggestion is: learn about these technologies. It will ease your way across these extraordinary times.

Interested in these topics? Follow me on LinkedIn or Twitter

Also published behind a paywall at https://medium.com/ai-in-plain-english/ai-has-become-so-human-that-you-cant-tell-the-difference-d62ed2f22775
